Dec 12 04:33:28 crc systemd[1]: Starting Kubernetes Kubelet... Dec 12 04:33:28 crc restorecon[4595]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 12 04:33:28 
crc restorecon[4595]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 
04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc 
restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 
crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 
crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 
04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 12 04:33:28 crc 
restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 
04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 
04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc 
restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:28 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 12 04:33:29 crc restorecon[4595]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 12 04:33:29 crc kubenswrapper[4796]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.229431 4796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235351 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235385 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235401 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235415 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235427 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235437 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235448 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235458 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235469 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235479 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235489 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235499 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235510 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235521 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235532 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235542 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235552 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235561 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235572 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235582 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235591 4796 feature_gate.go:330] unrecognized feature gate: 
AutomatedEtcdBackup Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235601 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235612 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235622 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235633 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235643 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235653 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235662 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235682 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235693 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235703 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235712 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235723 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235732 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235742 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235752 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235761 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235771 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235781 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235791 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235801 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235816 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235830 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235844 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235857 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235867 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235877 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235887 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235897 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235905 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235913 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235923 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235934 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235944 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.235952 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236210 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236221 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236230 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236238 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236247 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236255 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236304 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236313 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236320 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236328 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236336 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236345 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 
04:33:29.236356 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236369 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236379 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.236388 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236536 4796 flags.go:64] FLAG: --address="0.0.0.0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236554 4796 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236571 4796 flags.go:64] FLAG: --anonymous-auth="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236609 4796 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236622 4796 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236631 4796 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236644 4796 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236655 4796 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236665 4796 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236675 4796 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236685 4796 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236695 4796 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236704 4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236714 4796 flags.go:64] FLAG: --cgroup-root="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236722 4796 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236732 4796 flags.go:64] FLAG: --client-ca-file="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236741 4796 flags.go:64] FLAG: --cloud-config="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236750 4796 flags.go:64] FLAG: --cloud-provider="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236759 4796 flags.go:64] FLAG: --cluster-dns="[]" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236772 4796 flags.go:64] FLAG: --cluster-domain="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236783 4796 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236795 4796 flags.go:64] FLAG: --config-dir="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236806 4796 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236821 4796 flags.go:64] FLAG: --container-log-max-files="5" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236838 4796 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 12 
04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236851 4796 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236863 4796 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236876 4796 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236887 4796 flags.go:64] FLAG: --contention-profiling="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236897 4796 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236907 4796 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236920 4796 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236932 4796 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236946 4796 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236958 4796 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236969 4796 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236981 4796 flags.go:64] FLAG: --enable-load-reader="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.236992 4796 flags.go:64] FLAG: --enable-server="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237003 4796 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237016 4796 flags.go:64] FLAG: --event-burst="100" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237025 4796 flags.go:64] FLAG: --event-qps="50" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237036 4796 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237045 4796 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237055 4796 flags.go:64] FLAG: --eviction-hard="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237067 4796 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237076 4796 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237086 4796 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237096 4796 flags.go:64] FLAG: --eviction-soft="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237105 4796 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237114 4796 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237123 4796 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237133 4796 flags.go:64] FLAG: --experimental-mounter-path="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237141 4796 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237150 4796 flags.go:64] FLAG: --fail-swap-on="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237160 4796 flags.go:64] FLAG: 
--feature-gates="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237170 4796 flags.go:64] FLAG: --file-check-frequency="20s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237179 4796 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237189 4796 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237198 4796 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237208 4796 flags.go:64] FLAG: --healthz-port="10248" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237217 4796 flags.go:64] FLAG: --help="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237228 4796 flags.go:64] FLAG: --hostname-override="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237237 4796 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237246 4796 flags.go:64] FLAG: --http-check-frequency="20s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237255 4796 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237264 4796 flags.go:64] FLAG: --image-credential-provider-config="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237273 4796 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237331 4796 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237341 4796 flags.go:64] FLAG: --image-service-endpoint="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237350 4796 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237359 4796 flags.go:64] FLAG: --kube-api-burst="100" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237368 4796 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237378 4796 flags.go:64] FLAG: --kube-api-qps="50" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237387 4796 flags.go:64] FLAG: --kube-reserved="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237396 4796 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237404 4796 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237414 4796 flags.go:64] FLAG: --kubelet-cgroups="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237422 4796 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237433 4796 flags.go:64] FLAG: --lock-file="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237475 4796 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237488 4796 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237500 4796 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237530 4796 flags.go:64] FLAG: --log-json-split-stream="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237540 4796 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237549 4796 flags.go:64] FLAG: --log-text-split-stream="false" Dec 12 
04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237558 4796 flags.go:64] FLAG: --logging-format="text" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237567 4796 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237576 4796 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237585 4796 flags.go:64] FLAG: --manifest-url="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237595 4796 flags.go:64] FLAG: --manifest-url-header="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237606 4796 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237618 4796 flags.go:64] FLAG: --max-open-files="1000000" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237629 4796 flags.go:64] FLAG: --max-pods="110" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237639 4796 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237649 4796 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237660 4796 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237671 4796 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237683 4796 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237695 4796 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237706 4796 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237732 4796 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237742 4796 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237751 4796 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237761 4796 flags.go:64] FLAG: --pod-cidr="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237770 4796 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237785 4796 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237794 4796 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237803 4796 flags.go:64] FLAG: --pods-per-core="0" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237812 4796 flags.go:64] FLAG: --port="10250" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237821 4796 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237830 4796 flags.go:64] FLAG: --provider-id="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237839 4796 flags.go:64] FLAG: --qos-reserved="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237848 4796 flags.go:64] FLAG: --read-only-port="10255" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237857 4796 flags.go:64] FLAG: --register-node="true" Dec 12 04:33:29 crc 
kubenswrapper[4796]: I1212 04:33:29.237866 4796 flags.go:64] FLAG: --register-schedulable="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237877 4796 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237892 4796 flags.go:64] FLAG: --registry-burst="10" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237901 4796 flags.go:64] FLAG: --registry-qps="5" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237910 4796 flags.go:64] FLAG: --reserved-cpus="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237919 4796 flags.go:64] FLAG: --reserved-memory="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237942 4796 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237952 4796 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237961 4796 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237970 4796 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237979 4796 flags.go:64] FLAG: --runonce="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237988 4796 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.237997 4796 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238007 4796 flags.go:64] FLAG: --seccomp-default="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238016 4796 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238025 4796 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238035 4796 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238044 4796 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238053 4796 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238062 4796 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238073 4796 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238085 4796 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238096 4796 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238108 4796 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238119 4796 flags.go:64] FLAG: --system-cgroups="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238132 4796 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238148 4796 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238157 4796 flags.go:64] FLAG: --tls-cert-file="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238165 4796 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238178 4796 flags.go:64] FLAG: --tls-min-version="" Dec 12 04:33:29 
crc kubenswrapper[4796]: I1212 04:33:29.238187 4796 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238196 4796 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238205 4796 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238214 4796 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238223 4796 flags.go:64] FLAG: --v="2" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238235 4796 flags.go:64] FLAG: --version="false" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238246 4796 flags.go:64] FLAG: --vmodule="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238258 4796 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.238269 4796 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238515 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238530 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238541 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238551 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238560 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238568 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238576 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238584 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238592 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238601 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238608 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238619 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
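The flags.go:64 entries above dump every kubelet flag with its effective value (for example --config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11", --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"). A small sketch like the one below can turn that dump into a name-to-value map for easier diffing between boots; the `FLAG: --name="value"` layout is taken from the lines above, everything else is illustrative.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Matches entries such as:
//   flags.go:64] FLAG: --node-ip="192.168.126.11"
var flagLine = regexp.MustCompile(`FLAG: (--[\w-]+)="(.*?)"`)

func main() {
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		// A single journal line may carry several FLAG entries.
		for _, m := range flagLine.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	names := make([]string, 0, len(flags))
	for name := range flags {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fmt.Printf("%s=%s\n", name, flags[name])
	}
}
```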
Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238629 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238638 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238647 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238655 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238663 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238671 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238679 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238687 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238695 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238703 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238713 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238722 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238732 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238741 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238750 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238761 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238770 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238780 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238790 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238800 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238809 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238819 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238831 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238842 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238852 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 04:33:29 crc 
kubenswrapper[4796]: W1212 04:33:29.238861 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238870 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238880 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238890 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238899 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238912 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238923 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238932 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238943 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238953 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238962 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238972 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238983 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.238993 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239002 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239010 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239018 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239026 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239036 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239046 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239056 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239066 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239076 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239086 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239096 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239105 4796 feature_gate.go:330] unrecognized feature gate: 
AWSClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239113 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239121 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239128 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239136 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239143 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239151 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239159 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.239168 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.239182 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.252026 4796 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.252081 4796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252234 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252306 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252317 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252329 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252338 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252347 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252358 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252371 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
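After warning about each OpenShift-specific gate name it does not recognize, the kubelet prints the resolved gate map at feature_gate.go:386 (`feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}`) and reports kubeletVersion="v1.31.5". As a hedged illustration that only assumes the printed `map[Name:bool ...]` form shown above (it is not a kubelet API), the sketch below splits such a dump into enabled and disabled gates:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// parseGates parses the "map[Name:true Name:false ...]" form that the
// kubelet prints at feature_gate.go:386 into a name->enabled map.
func parseGates(dump string) map[string]bool {
	gates := map[string]bool{}
	start := strings.Index(dump, "map[")
	end := strings.LastIndex(dump, "]")
	if start < 0 || end <= start {
		return gates
	}
	for _, field := range strings.Fields(dump[start+len("map[") : end]) {
		name, value, ok := strings.Cut(field, ":")
		if !ok {
			continue
		}
		gates[name] = value == "true"
	}
	return gates
}

func main() {
	// Excerpt of the resolved gate map printed in the log above.
	dump := `feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`
	var enabled, disabled []string
	for name, on := range parseGates(dump) {
		if on {
			enabled = append(enabled, name)
		} else {
			disabled = append(disabled, name)
		}
	}
	sort.Strings(enabled)
	sort.Strings(disabled)
	fmt.Println("enabled: ", enabled)
	fmt.Println("disabled:", disabled)
}
```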
Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252382 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252391 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252399 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252409 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252418 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252427 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252435 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252445 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252453 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252461 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252469 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252508 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252516 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252524 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252531 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252540 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252550 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252560 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252571 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252582 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252592 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252602 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252613 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252625 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252634 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252642 4796 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252652 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252660 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252668 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252676 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252683 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252691 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252699 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252707 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252714 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252722 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252729 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252737 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252745 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252752 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252760 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252767 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252775 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252782 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252790 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252798 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252806 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252814 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252821 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252832 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252842 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252850 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252892 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252902 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252910 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252917 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252925 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252935 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252945 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252954 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252962 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252970 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.252982 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.252997 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253224 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253242 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253253 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253263 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253273 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253310 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253319 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253328 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager 
Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253338 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253348 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253358 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253368 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253377 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253387 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253397 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253406 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253416 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253424 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253432 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253440 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253447 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253458 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253466 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253473 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253481 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253489 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253499 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253507 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253515 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253522 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253530 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253538 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253546 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253554 4796 feature_gate.go:330] unrecognized 
feature gate: DNSNameResolver Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253564 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253572 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253613 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253622 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253630 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253638 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253645 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253654 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253661 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253669 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253676 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253687 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253697 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253705 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253713 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253721 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253729 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253736 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253744 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253752 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253760 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253767 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253775 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253783 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253790 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253801 4796 feature_gate.go:353] 
Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253811 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253822 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253830 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253839 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253847 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253854 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253863 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253873 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253882 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253891 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.253902 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.253916 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.254443 4796 server.go:940] "Client rotation is on, will bootstrap in background" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.258777 4796 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.258918 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
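The bootstrap entries above decide that the existing kubeconfig is still valid and load the client certificate pair from "/var/lib/kubelet/pki/kubelet-client-current.pem"; the rotation entries that follow report its expiration and rotation deadline. Purely as an illustrative sketch using Go's standard library (not kubelet code, and the path is simply the one named in the log), this is one way to read that PEM file and print each certificate's validity window:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the certificate_store.go entry in the log above.
	const path = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read %s: %v", path, err)
	}
	// The file may bundle the client cert and key; walk every PEM block
	// and report each CERTIFICATE block's validity window.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("parse certificate: %v", err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```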
Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.260551 4796 server.go:997] "Starting client certificate rotation" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.260600 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.260880 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 13:10:29.954613341 +0000 UTC Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.261036 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.268619 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.270476 4796 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.271766 4796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.284080 4796 log.go:25] "Validated CRI v1 runtime API" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.305058 4796 log.go:25] "Validated CRI v1 image API" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.307553 4796 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.310587 4796 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-12-04-27-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.310629 4796 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.323714 4796 manager.go:217] Machine: {Timestamp:2025-12-12 04:33:29.322664338 +0000 UTC m=+0.198681525 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199472640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6e5a9b12-f1e8-4509-b376-8d2a837dae47 BootID:29962b12-6b98-48df-a7ea-a35f82b5869e Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9c:df:de Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9c:df:de Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0e:1a:47 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:25:d9:c7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:48:24 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2f:b6:d7 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:96:b6:57:e9:4c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4a:13:60:d2:a2:3a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199472640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.323944 4796 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. 
Perf event counters are not available. Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.324127 4796 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.324841 4796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325044 4796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325387 4796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325704 4796 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325721 4796 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325948 4796 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.325974 4796 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.326198 4796 state_mem.go:36] "Initialized new in-memory state store" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.326335 4796 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.326952 4796 kubelet.go:418] "Attempting to sync node with API server" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.326976 4796 kubelet.go:313] "Adding static pod 
path" path="/etc/kubernetes/manifests" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.327009 4796 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.327071 4796 kubelet.go:324] "Adding apiserver pod source" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.327086 4796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.329431 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.329794 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.329487 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.329829 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.330299 4796 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.330727 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.331347 4796 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.331992 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332096 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332172 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332238 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332329 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332399 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332474 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332547 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332614 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332679 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332748 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.332812 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.333083 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.333579 4796 server.go:1280] "Started kubelet" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.334080 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.334417 4796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.334561 4796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 04:33:29 crc systemd[1]: Started Kubernetes Kubelet. 
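[editor's note] The entries above show the kubelet loading /var/lib/kubelet/pki/kubelet-server-current.pem and starting to listen on 0.0.0.0:10250. A stdlib-only Go sketch (not kubelet code) for inspecting the serving certificate actually presented on that port, e.g. to cross-check the rotation deadlines logged further down; the local address is an assumption and InsecureSkipVerify is used only because the goal is to read the peer certificate, not to authenticate the endpoint:

// certpeek.go - stdlib-only sketch for reading the certificate served on the
// kubelet secure port referenced by the "Starting to listen ... port=10250" entry.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	addr := "127.0.0.1:10250" // adjust host as needed; port from the log above
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}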
Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.336856 4796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.336995 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.337019 4796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.337577 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:35:57.02353077 +0000 UTC Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.337637 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 127h2m27.685898223s for next certificate rotation Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.338084 4796 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.338102 4796 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.338260 4796 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.338158 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18805da32c4d56ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 04:33:29.333552814 +0000 UTC m=+0.209569971,LastTimestamp:2025-12-12 04:33:29.333552814 +0000 UTC m=+0.209569971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.338712 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.339392 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.339647 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.340433 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.340669 4796 server.go:460] "Adding debug handlers to kubelet server" Dec 12 04:33:29 crc kubenswrapper[4796]: 
I1212 04:33:29.341585 4796 factory.go:55] Registering systemd factory Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.341735 4796 factory.go:221] Registration of the systemd container factory successfully Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.343147 4796 factory.go:153] Registering CRI-O factory Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.343192 4796 factory.go:221] Registration of the crio container factory successfully Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.343360 4796 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.343421 4796 factory.go:103] Registering Raw factory Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.343463 4796 manager.go:1196] Started watching for new ooms in manager Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.350986 4796 manager.go:319] Starting recovery of all containers Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.358957 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359039 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359065 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359086 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359107 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359127 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359147 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359167 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359245 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359266 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359353 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359379 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359409 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359435 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359459 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359485 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359510 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359530 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359574 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359595 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359614 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359633 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359655 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359677 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359695 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359719 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359741 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359763 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359791 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359817 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359840 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359860 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359878 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359899 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359919 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359940 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359959 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359979 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.359998 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360023 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360043 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360122 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360143 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360164 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360183 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360202 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360221 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360240 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360261 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360312 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360335 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360363 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360383 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360405 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360426 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360447 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360466 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360486 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360505 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360524 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360543 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360561 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.360579 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363660 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363702 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363724 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363744 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363764 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363786 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363804 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363826 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363848 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363870 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363890 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363911 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363930 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363951 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363971 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.363992 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364010 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364029 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364049 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364067 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364088 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364107 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364127 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364146 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364166 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364185 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364204 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364222 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364241 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364260 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364353 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364373 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364393 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364411 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364429 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364449 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364469 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364488 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364508 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364527 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364565 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364587 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364608 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364629 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364652 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364672 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364695 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364716 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364736 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364757 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364777 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364797 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364817 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364836 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364854 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364872 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364894 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364913 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364933 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364953 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.364988 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365007 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365026 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365046 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365065 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365086 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365105 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365124 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365144 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365164 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365182 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365202 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365224 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365244 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365264 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365307 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365328 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365368 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365388 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365413 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365439 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365458 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365478 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365497 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365516 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365535 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365553 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365573 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365594 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365614 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365634 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365654 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365674 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365692 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365713 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365732 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365750 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.365770 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.367224 4796 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.367274 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.367324 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368116 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368156 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368190 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368251 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368315 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368352 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368383 4796 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368411 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368436 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368462 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368481 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368500 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368520 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368542 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368563 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368585 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368606 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368627 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368648 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368668 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368690 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368711 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368732 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368751 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368770 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368841 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368861 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368884 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368956 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.368979 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371382 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371415 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371438 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371462 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371482 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371502 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371523 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371545 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371570 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371593 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371614 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371633 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371653 4796 reconstruct.go:97] "Volume reconstruction finished" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.371670 4796 reconciler.go:26] "Reconciler: start to sync state" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.375771 4796 manager.go:324] Recovery completed Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.394156 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.395858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.395975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.396053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.397159 4796 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.397182 4796 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.397202 4796 state_mem.go:36] "Initialized new in-memory state store" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.406220 4796 policy_none.go:49] "None policy: Start" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.407025 4796 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.407141 4796 state_mem.go:35] "Initializing new in-memory state store" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.407427 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.409981 4796 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.410018 4796 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.410045 4796 kubelet.go:2335] "Starting kubelet main sync loop" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.410093 4796 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.413724 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.414473 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.439130 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469151 4796 manager.go:334] "Starting Device Plugin manager" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469193 4796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469206 4796 server.go:79] "Starting device plugin registration server" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469584 4796 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469607 4796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469750 4796 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469844 4796 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.469854 4796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.477318 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.510415 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.510496 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511443 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511579 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511820 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.511875 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512450 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512615 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512645 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.512820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513333 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513489 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513544 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.513938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514008 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514123 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514191 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514828 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.514857 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.515405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.540345 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.569771 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.570992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.571034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.571051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.571111 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.571609 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573409 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573426 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573445 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573459 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573487 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573515 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.573884 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.574025 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.574175 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.574250 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.574338 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676120 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676156 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676247 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676368 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676269 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676615 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676775 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676830 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.676953 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.677001 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.677140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.772798 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.774688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.774763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.774788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.774833 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.775525 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.836957 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.846814 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.872214 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.882056 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2e61316ab22305f77aca77a28e539d5407dfb84ea513f8cfe0056d33532609a6 WatchSource:0}: Error finding container 2e61316ab22305f77aca77a28e539d5407dfb84ea513f8cfe0056d33532609a6: Status 404 returned error can't find the container with id 2e61316ab22305f77aca77a28e539d5407dfb84ea513f8cfe0056d33532609a6 Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.884502 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b6fd3f967d97cf7d259705771071fc3e0ad9d711deafbc22a7e830a32724593a WatchSource:0}: Error finding container b6fd3f967d97cf7d259705771071fc3e0ad9d711deafbc22a7e830a32724593a: Status 404 returned error can't find the container with id b6fd3f967d97cf7d259705771071fc3e0ad9d711deafbc22a7e830a32724593a Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.893894 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4484b9f24f403b89d492bbc828649ff68170a66a84be9f515fa0fdd6b14d812a WatchSource:0}: Error finding container 4484b9f24f403b89d492bbc828649ff68170a66a84be9f515fa0fdd6b14d812a: Status 404 returned error can't find the container with id 4484b9f24f403b89d492bbc828649ff68170a66a84be9f515fa0fdd6b14d812a Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.900915 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: I1212 04:33:29.909529 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.927929 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f6f1888ef98228ef3280ed7743f02ea2794378f58c8eec08872e45a45fc76be0 WatchSource:0}: Error finding container f6f1888ef98228ef3280ed7743f02ea2794378f58c8eec08872e45a45fc76be0: Status 404 returned error can't find the container with id f6f1888ef98228ef3280ed7743f02ea2794378f58c8eec08872e45a45fc76be0 Dec 12 04:33:29 crc kubenswrapper[4796]: W1212 04:33:29.937482 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-eef9dbf3d1ad59d63e2f8ab16a6981fadcdfbfb25e41baa0dee73b5d189f8c4a WatchSource:0}: Error finding container eef9dbf3d1ad59d63e2f8ab16a6981fadcdfbfb25e41baa0dee73b5d189f8c4a: Status 404 returned error can't find the container with id eef9dbf3d1ad59d63e2f8ab16a6981fadcdfbfb25e41baa0dee73b5d189f8c4a Dec 12 04:33:29 crc kubenswrapper[4796]: E1212 04:33:29.941129 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.176510 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.178730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.178773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.178785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.178814 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.179300 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.334940 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:30 crc kubenswrapper[4796]: W1212 04:33:30.392611 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.392679 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: 
connection refused" logger="UnhandledError" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.418261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.418425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6f1888ef98228ef3280ed7743f02ea2794378f58c8eec08872e45a45fc76be0"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.420942 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db" exitCode=0 Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.421056 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.421099 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4484b9f24f403b89d492bbc828649ff68170a66a84be9f515fa0fdd6b14d812a"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.421235 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.422837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.422880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.422895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.423268 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3" exitCode=0 Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.423362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.423393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e61316ab22305f77aca77a28e539d5407dfb84ea513f8cfe0056d33532609a6"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.423498 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.424484 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.424510 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.424700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.424712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425311 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e04f4e47f39a96e38216c4c21009aa831c06bdcb8b1ac74193008736ec65822a" exitCode=0 Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e04f4e47f39a96e38216c4c21009aa831c06bdcb8b1ac74193008736ec65822a"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b6fd3f967d97cf7d259705771071fc3e0ad9d711deafbc22a7e830a32724593a"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425445 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.425598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.426491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.426512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.426525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.427908 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d" exitCode=0 Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.427962 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.428244 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eef9dbf3d1ad59d63e2f8ab16a6981fadcdfbfb25e41baa0dee73b5d189f8c4a"} Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.428253 4796 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.429426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.429453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.429463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: W1212 04:33:30.598666 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.598783 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:30 crc kubenswrapper[4796]: W1212 04:33:30.624635 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.624703 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.742254 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Dec 12 04:33:30 crc kubenswrapper[4796]: W1212 04:33:30.777647 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.777858 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.980104 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.987074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.987110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.987138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:30 crc kubenswrapper[4796]: I1212 04:33:30.987172 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:30 crc kubenswrapper[4796]: E1212 04:33:30.989908 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.277554 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.432302 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0f205f60c288a9ebadb4bd0c1437ed7944b0119e487277f585a6ee560c2d86ab"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.432685 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.436354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.436396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.436408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.438223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.438257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.438289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.438374 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.439060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.439082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.439092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441217 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441255 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441246 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441305 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.441870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444544 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444554 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.444622 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.445182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.445205 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.445219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.446617 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a" exitCode=0 Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.446643 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a"} Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.446707 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.447226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.447243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.447251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:31 crc kubenswrapper[4796]: I1212 04:33:31.936321 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.318117 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454153 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8" exitCode=0 Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8"} Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454469 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454483 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454544 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.454916 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456162 4796 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.456850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.590063 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.591603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.591651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.591668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.591705 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.700971 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:32 crc kubenswrapper[4796]: I1212 04:33:32.710258 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.451385 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.451492 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.451524 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.452263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.452314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.452325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.461416 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428"} Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.461449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d"} Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.461468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9"} Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.461454 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.462221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.462239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.462247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:33 crc kubenswrapper[4796]: I1212 04:33:33.509175 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.470919 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034"} Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.470988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca"} Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.471083 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.472012 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.473968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.937059 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 04:33:34 crc kubenswrapper[4796]: I1212 04:33:34.937358 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.146724 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.147419 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.147627 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.151002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.151061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.151079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.474649 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.475442 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.476220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.476515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.476734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.477561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.477602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.477618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.899835 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.900095 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 
12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.902250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.902373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:35 crc kubenswrapper[4796]: I1212 04:33:35.902393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:36 crc kubenswrapper[4796]: I1212 04:33:36.973100 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 12 04:33:36 crc kubenswrapper[4796]: I1212 04:33:36.973380 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:36 crc kubenswrapper[4796]: I1212 04:33:36.975244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:36 crc kubenswrapper[4796]: I1212 04:33:36.975326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:36 crc kubenswrapper[4796]: I1212 04:33:36.975351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:39 crc kubenswrapper[4796]: E1212 04:33:39.477441 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.537880 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.538059 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.539415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.539468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.539477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.729887 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.730500 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.732560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.732628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:39 crc kubenswrapper[4796]: I1212 04:33:39.732646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:41 crc kubenswrapper[4796]: E1212 04:33:41.279136 4796 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 12 04:33:41 crc kubenswrapper[4796]: I1212 04:33:41.336060 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 12 04:33:42 crc kubenswrapper[4796]: I1212 04:33:42.184806 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 04:33:42 crc kubenswrapper[4796]: I1212 04:33:42.184914 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 04:33:42 crc kubenswrapper[4796]: I1212 04:33:42.189516 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 04:33:42 crc kubenswrapper[4796]: I1212 04:33:42.189566 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.457248 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.457942 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.459119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.459191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.459219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.463558 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.504266 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.505055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.505081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 
04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.505090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.513772 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.513857 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.514570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.514591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:43 crc kubenswrapper[4796]: I1212 04:33:43.514599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:44 crc kubenswrapper[4796]: I1212 04:33:44.938018 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 04:33:44 crc kubenswrapper[4796]: I1212 04:33:44.938133 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 12 04:33:45 crc kubenswrapper[4796]: I1212 04:33:45.312466 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 12 04:33:45 crc kubenswrapper[4796]: I1212 04:33:45.330079 4796 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.176832 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.179796 4796 trace.go:236] Trace[382799182]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 04:33:33.820) (total time: 13358ms): Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[382799182]: ---"Objects listed" error: 13358ms (04:33:47.179) Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[382799182]: [13.358746748s] [13.358746748s] END Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.179839 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.181040 4796 trace.go:236] Trace[2113311144]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 04:33:33.439) (total time: 13741ms): Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[2113311144]: ---"Objects listed" error: 13741ms (04:33:47.180) Dec 12 04:33:47 crc kubenswrapper[4796]: 
Trace[2113311144]: [13.741445039s] [13.741445039s] END Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.181064 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.181094 4796 trace.go:236] Trace[1644617769]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 04:33:32.996) (total time: 14184ms): Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[1644617769]: ---"Objects listed" error: 14184ms (04:33:47.180) Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[1644617769]: [14.184347914s] [14.184347914s] END Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.181132 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.183436 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.184876 4796 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.185157 4796 trace.go:236] Trace[921378824]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 04:33:33.148) (total time: 14036ms): Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[921378824]: ---"Objects listed" error: 14036ms (04:33:47.185) Dec 12 04:33:47 crc kubenswrapper[4796]: Trace[921378824]: [14.036900254s] [14.036900254s] END Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.185182 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.313763 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38894->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.313847 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38894->192.168.126.11:17697: read: connection reset by peer" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.315973 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.316002 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.316195 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.316374 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.337058 4796 apiserver.go:52] "Watching apiserver" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.342184 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.342689 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.343566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.343751 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.343834 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.344027 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.344153 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.343827 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.344589 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.344631 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.348129 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.348140 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.348439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.348555 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.349950 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.350665 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.350921 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.351410 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.351731 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.353351 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.382688 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.409638 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.425718 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.439526 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.439602 4796 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.453219 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.471035 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.487620 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488505 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488595 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488705 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488786 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.488987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489051 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489135 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489289 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489431 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489630 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.489768 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490131 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490250 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490548 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491150 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490856 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490882 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.490901 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491066 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491585 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.491886 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.492133 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.492228 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493060 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493153 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493217 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493302 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493456 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493531 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493664 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493795 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493856 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494035 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494117 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494205 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.493019 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494502 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494693 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495105 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.494517 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495579 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495650 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495673 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495694 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495718 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495770 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495812 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495881 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495887 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495934 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.495984 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496009 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496031 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496052 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496093 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496114 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496136 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496137 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496252 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496272 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496369 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496388 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496408 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496425 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496444 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496480 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496554 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496573 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496592 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496611 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496661 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496679 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496695 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496712 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496746 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496781 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496798 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496815 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496835 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496852 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496880 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496918 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496934 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496954 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496969 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496985 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497003 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497021 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497037 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497055 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497126 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497145 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497179 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497217 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497238 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497255 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497295 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497312 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497330 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497349 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497367 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497386 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497422 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497441 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497477 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497495 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497514 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 
04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497563 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497615 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497633 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497668 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497687 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497720 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497739 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497757 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497841 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497859 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497877 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497893 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497908 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497924 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498017 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498035 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498087 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498155 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498189 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498245 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498263 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498299 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498316 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498331 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498348 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498397 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498414 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500853 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500902 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.500923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501014 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501052 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: 
I1212 04:33:47.501072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505169 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505238 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 12 04:33:47 crc 
kubenswrapper[4796]: I1212 04:33:47.505380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505535 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505562 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505630 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505691 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505752 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505774 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.505838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506079 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506106 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506126 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: 
I1212 04:33:47.506141 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506157 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506170 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506188 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506202 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506216 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506234 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506254 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506268 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506331 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506351 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506365 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506378 4796 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506422 4796 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506426 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506479 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506494 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506507 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506520 4796 reconciler_common.go:293] "Volume detached 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506538 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506555 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506588 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506606 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506620 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506633 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496495 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496562 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.496747 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497029 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497327 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.509643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.497928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.498273 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501558 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.501842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502079 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502153 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.502825 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.503479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.503577 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.503606 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.504458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.504615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506237 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.506803 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.507177 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.507187 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:33:48.007162327 +0000 UTC m=+18.883179474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.507461 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.507713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.507773 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.507974 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.508602 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.508925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.504699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.508989 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.509151 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.509541 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.509659 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.509988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.510196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.510662 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.510947 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.511023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.511439 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.511616 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.511936 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.512180 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.513044 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.514407 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.514639 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.515675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.516233 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.516560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.516569 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.516615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.516829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.517388 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.517504 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.517961 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.518156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.518652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.518946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.519167 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.519479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.519542 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.519779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.519953 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520202 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520427 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520528 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520575 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.520939 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.521188 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.521310 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.521355 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.521874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.521999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.522389 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.522543 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.522726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.522960 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.522963 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523728 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.523940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524105 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524421 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524888 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525022 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525755 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.525982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.526193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.526205 4796 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.526493 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.526534 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527076 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527056 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527189 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.527791 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.528079 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.524553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.529050 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.529375 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.529636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.529760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.529968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.530188 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.530477 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.530617 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.531032 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.531081 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.531529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.531670 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.531691 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.532198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.532595 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.533032 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.533313 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.534396 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.534393 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.534736 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.534817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.534990 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.535340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.535363 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.535492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536048 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536184 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536641 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.537156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.537190 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.537250 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.537272 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.538008 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.538089 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.538393 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.538736 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.538803 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.536509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.539838 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.540114 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.540155 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.540724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.540812 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.540879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.541202 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.541212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.541307 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.541387 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.541813 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.541915 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:48.041891479 +0000 UTC m=+18.917908616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.542044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.542403 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.542586 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:48.042548629 +0000 UTC m=+18.918565776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.542669 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.542726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.547849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.551068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.554628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.556380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.560755 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.560793 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.560811 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.560894 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:48.060869404 +0000 UTC m=+18.936886551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.561484 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.561775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.563778 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.563800 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.563814 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:47 crc kubenswrapper[4796]: E1212 04:33:47.563869 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-12 04:33:48.063850186 +0000 UTC m=+18.939867333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.563990 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1" exitCode=255 Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.564074 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1"} Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.565691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.567605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.586139 4796 scope.go:117] "RemoveContainer" containerID="bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.588181 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.590159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.591045 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.594671 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.607714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608248 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608637 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608672 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608686 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608697 4796 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608709 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608718 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608728 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608753 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608764 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608775 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608788 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608798 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608807 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608832 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608842 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608851 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608860 4796 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608870 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608879 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608913 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608924 4796 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608937 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.608947 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609006 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609017 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609030 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609039 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609048 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609058 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609084 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609093 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609103 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609113 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609122 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609131 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609158 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609169 4796 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609180 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609190 4796 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609201 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609214 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609237 4796 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609247 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609258 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609270 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609304 4796 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609316 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609325 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609348 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609358 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609383 4796 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609394 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609406 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609415 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609426 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609435 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609461 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609470 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609482 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609491 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609503 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609515 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609540 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609550 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609559 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609570 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609580 4796 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609588 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609612 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609624 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609633 4796 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609642 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609658 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609667 4796 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609703 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609717 4796 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609728 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609738 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609747 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609771 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609781 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609791 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609800 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609808 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609817 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609827 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609849 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609859 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609868 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609878 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609889 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609898 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609921 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609936 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609946 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609955 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609965 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609974 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.609983 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610006 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610015 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610025 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610035 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610045 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610054 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610077 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610087 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610096 4796 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610106 4796 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610115 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610125 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610134 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610161 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610171 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610179 4796 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610189 4796 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610197 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610206 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610215 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610313 4796 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610325 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610336 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610345 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610379 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610387 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610397 4796 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610406 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610416 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610510 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610522 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610533 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610545 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610554 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610580 4796 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610590 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610601 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610610 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610619 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610627 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610638 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610668 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610677 4796 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610687 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610696 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610704 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610713 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610736 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610745 4796 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610754 4796 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610763 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610772 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610781 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610790 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610813 4796 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610823 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610833 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610842 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610851 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610860 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610890 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610903 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610912 4796 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610921 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610931 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610940 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.610949 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.619493 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.631476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.640920 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.651330 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.667844 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.679574 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 12 04:33:47 crc kubenswrapper[4796]: I1212 04:33:47.694479 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 12 04:33:47 crc kubenswrapper[4796]: W1212 04:33:47.711847 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a0e0aab218d5119123a2f5ca3c29721363478126e42325a85bb5ec89d18ef53e WatchSource:0}: Error finding container a0e0aab218d5119123a2f5ca3c29721363478126e42325a85bb5ec89d18ef53e: Status 404 returned error can't find the container with id a0e0aab218d5119123a2f5ca3c29721363478126e42325a85bb5ec89d18ef53e Dec 12 04:33:47 crc kubenswrapper[4796]: W1212 04:33:47.724140 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-87fb17b066940a733fe6d1e980299ff328f019d673638fab9b853ed2386d5563 WatchSource:0}: Error finding container 87fb17b066940a733fe6d1e980299ff328f019d673638fab9b853ed2386d5563: Status 404 returned error can't find the container with id 87fb17b066940a733fe6d1e980299ff328f019d673638fab9b853ed2386d5563 Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.005482 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.012857 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.013058 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:33:49.013037651 +0000 UTC m=+19.889054798 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.114361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.114584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114510 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114683 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:49.114664625 +0000 UTC m=+19.990681772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114858 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114894 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114905 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.114967 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-12 04:33:49.114947254 +0000 UTC m=+19.990964401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.115094 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.115140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115098 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115214 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115222 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115253 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:49.115246413 +0000 UTC m=+19.991263560 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115224 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.115322 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:49.115313466 +0000 UTC m=+19.991330613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.410508 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.410635 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.411018 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:48 crc kubenswrapper[4796]: E1212 04:33:48.411081 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.567737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.567781 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d901c04b5f2b552bfe91f3303402f5a713ad96446e3894e821c03ec76f3d8ba3"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.571027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.571071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.571088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"87fb17b066940a733fe6d1e980299ff328f019d673638fab9b853ed2386d5563"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.572520 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a0e0aab218d5119123a2f5ca3c29721363478126e42325a85bb5ec89d18ef53e"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.574375 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.576002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5"} Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.576397 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.589250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.606144 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.621030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.639867 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.658842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.671338 4796 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.690341 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12
T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.709172 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.729906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.743950 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.759577 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.774439 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.800621 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:48 crc kubenswrapper[4796]: I1212 04:33:48.813861 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.022929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.023131 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:33:51.023102917 +0000 UTC m=+21.899120064 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.124325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.124363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.124387 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.124407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124490 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124548 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124565 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124566 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:51.124548056 +0000 UTC m=+22.000565203 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124577 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124590 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124628 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124641 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:51.124609958 +0000 UTC m=+22.000627095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124645 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124687 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124710 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:51.12469041 +0000 UTC m=+22.000707567 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.124731 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:51.124722501 +0000 UTC m=+22.000739668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.410963 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:49 crc kubenswrapper[4796]: E1212 04:33:49.411163 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.415664 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.417405 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.419934 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.421523 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.423856 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.424868 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.426127 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.427700 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.428029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.428413 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.429035 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.429558 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.430221 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.430758 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.431258 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.431772 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.432303 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.432875 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.433246 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.433794 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.434384 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.434823 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.436782 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.437728 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.439500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.440226 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.441074 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.441851 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.442369 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.442928 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.443486 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.443959 4796 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.444089 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.447171 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.447728 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.448457 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.448591 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.450209 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.451138 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.452312 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.453111 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.454507 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.455079 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.456420 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.457170 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.458254 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.458787 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.459752 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.460250 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.461627 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.462222 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.462342 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.463297 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.463826 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.464381 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.465609 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.466208 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.483541 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.502319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.526681 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.545004 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.569061 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.583226 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.586745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.587519 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.598493 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.612555 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.626298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.640000 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.653470 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.665898 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.678092 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.691044 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.706849 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.724423 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.737632 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.754300 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.767484 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:49 crc kubenswrapper[4796]: I1212 04:33:49.789906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.383723 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.385746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.385812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.385824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.385909 4796 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.396465 4796 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.396820 4796 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.398109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.398170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.398182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.398199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.398227 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.410527 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.410575 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.410669 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.410790 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.447549 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.454766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.454818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.454833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.454855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.454870 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.475745 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.479636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.479674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.479683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.479700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.479709 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.493458 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.499389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.499556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.499613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.499685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.499750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.513292 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.516378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.516416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.516426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.516439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.516448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.526642 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: E1212 04:33:50.526980 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.528598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.528688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.528757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.528824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.528892 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.582680 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.602815 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.613288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.624008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.631256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.631308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.631321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.631340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.631352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.635926 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.651068 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.663358 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.675092 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.686976 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.733762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.733809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.733822 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.733839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.733849 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.836824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.836918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.836930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.836953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.836964 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.939492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.939523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.939534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.939548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:50 crc kubenswrapper[4796]: I1212 04:33:50.939558 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:50Z","lastTransitionTime":"2025-12-12T04:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.041668 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.041847 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:33:55.041816264 +0000 UTC m=+25.917833451 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.043066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.043106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.043127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.043152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.043171 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.143172 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.143237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.143275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.143346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143505 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143638 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:55.143608934 +0000 UTC m=+26.019626121 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143661 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143786 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:55.143755549 +0000 UTC m=+26.019772736 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143792 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143943 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.143967 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.144028 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:55.144011486 +0000 UTC m=+26.020028673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.144489 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.144647 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.144800 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.144987 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:33:55.144960696 +0000 UTC m=+26.020977853 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.145474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.145522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.145537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.145562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.145577 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.248001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.248052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.248063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.248078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.248087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.350343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.350404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.350420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.350443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.350459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.411094 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:51 crc kubenswrapper[4796]: E1212 04:33:51.411347 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.437898 4796 csr.go:261] certificate signing request csr-vz6xk is approved, waiting to be issued Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.451178 4796 csr.go:257] certificate signing request csr-vz6xk is issued Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.452935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.452977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.452988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.453004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.453017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.555581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.555615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.555624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.555636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.555645 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.657931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.657966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.657975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.657992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.658000 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.759930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.759955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.759963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.759976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.759984 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.861832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.861869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.861881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.861898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.861908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.940414 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.943306 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.956749 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:51Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.964050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.964078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.964087 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.964099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.964110 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:51Z","lastTransitionTime":"2025-12-12T04:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.973512 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:51Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.986851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:51Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.991942 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xksvx"] Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.992409 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.993948 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b68x4"] Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.994202 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-b68x4" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.998917 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 12 04:33:51 crc kubenswrapper[4796]: I1212 04:33:51.998988 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002213 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002270 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002221 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002584 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002590 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.002662 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5zck7"] Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.003105 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mxh7m"] Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.003267 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-996v7"] Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.003301 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.003384 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.004098 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.005144 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.006445 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.006571 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.006630 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.006795 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.010550 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.010622 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.010717 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.010852 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.010884 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011120 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011355 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011472 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011485 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011564 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.011504 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.014082 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.029694 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.047414 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.059874 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.066305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.066343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.066356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.066371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.066382 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.069339 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.087817 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.106093 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.137676 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4xz\" (UniqueName: \"kubernetes.io/projected/7b7537ef-8ad8-4901-a2db-1881d2754684-kube-api-access-pp4xz\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154189 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154208 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ldq9\" (UniqueName: \"kubernetes.io/projected/55b96fce-0e56-40cb-ab90-873a8421260b-kube-api-access-4ldq9\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154267 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-kubelet\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154306 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154331 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-cnibin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154382 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-k8s-cni-cncf-io\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154403 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-hostroot\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-cnibin\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154452 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154468 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-cni-binary-copy\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154540 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 
04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154569 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-socket-dir-parent\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-system-cni-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154629 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-system-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-os-release\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154703 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154738 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-multus\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-conf-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154804 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-etc-kubernetes\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154820 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-os-release\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-multus-daemon-config\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-bin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfckw\" (UniqueName: \"kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqq9\" (UniqueName: 
\"kubernetes.io/projected/f25c3eac-cf85-400e-be55-e093858a48be-kube-api-access-spqq9\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154952 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154965 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f25c3eac-cf85-400e-be55-e093858a48be-hosts-file\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154978 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-rootfs\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.154994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-multus-certs\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-binary-copy\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155035 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155048 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155063 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155076 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-netns\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-proxy-tls\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155127 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdtr\" (UniqueName: \"kubernetes.io/projected/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-kube-api-access-trdtr\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.155140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.161780 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.167801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.167829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.167853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.167865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.167873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.182817 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.223821 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255268 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4xz\" (UniqueName: \"kubernetes.io/projected/7b7537ef-8ad8-4901-a2db-1881d2754684-kube-api-access-pp4xz\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " 
pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ldq9\" (UniqueName: \"kubernetes.io/projected/55b96fce-0e56-40cb-ab90-873a8421260b-kube-api-access-4ldq9\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-cnibin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-kubelet\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255548 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255563 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255577 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-cni-binary-copy\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255590 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-k8s-cni-cncf-io\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-hostroot\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255637 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-cnibin\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255655 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255684 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-system-cni-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " 
pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255734 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255747 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-socket-dir-parent\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255760 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255773 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-system-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255788 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-os-release\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-etc-kubernetes\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 
04:33:52.255862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-multus\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-conf-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255893 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-os-release\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-multus-daemon-config\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfckw\" (UniqueName: \"kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqq9\" (UniqueName: \"kubernetes.io/projected/f25c3eac-cf85-400e-be55-e093858a48be-kube-api-access-spqq9\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-bin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255973 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.255987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-rootfs\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256016 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f25c3eac-cf85-400e-be55-e093858a48be-hosts-file\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-multus-certs\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-binary-copy\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdtr\" (UniqueName: \"kubernetes.io/projected/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-kube-api-access-trdtr\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns\") pod \"ovnkube-node-996v7\" 
(UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256231 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-netns\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-proxy-tls\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256349 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-cnibin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-kubelet\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256804 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-cni-binary-copy\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-netns\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-k8s-cni-cncf-io\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-hostroot\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-multus\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-conf-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-cnibin\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.256911 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-os-release\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257471 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55b96fce-0e56-40cb-ab90-873a8421260b-multus-daemon-config\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-var-lib-cni-bin\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257753 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257774 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-rootfs\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f25c3eac-cf85-400e-be55-e093858a48be-hosts-file\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-host-run-multus-certs\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.257979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-system-cni-dir\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258341 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-binary-copy\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b7537ef-8ad8-4901-a2db-1881d2754684-os-release\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258514 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-multus-socket-dir-parent\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-system-cni-dir\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.258983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.259247 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.259296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55b96fce-0e56-40cb-ab90-873a8421260b-etc-kubernetes\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.260268 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b7537ef-8ad8-4901-a2db-1881d2754684-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.265971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.266012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-proxy-tls\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.272267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.272325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.272333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.272346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.272356 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.289664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ldq9\" (UniqueName: \"kubernetes.io/projected/55b96fce-0e56-40cb-ab90-873a8421260b-kube-api-access-4ldq9\") pod \"multus-b68x4\" (UID: \"55b96fce-0e56-40cb-ab90-873a8421260b\") " pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.291455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdtr\" (UniqueName: \"kubernetes.io/projected/0403e92c-3d00-4092-a6d0-cdbc36b3ec1c-kube-api-access-trdtr\") pod \"machine-config-daemon-mxh7m\" (UID: \"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\") " pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.291599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.295175 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfckw\" (UniqueName: \"kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw\") pod \"ovnkube-node-996v7\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.306886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4xz\" (UniqueName: \"kubernetes.io/projected/7b7537ef-8ad8-4901-a2db-1881d2754684-kube-api-access-pp4xz\") pod \"multus-additional-cni-plugins-5zck7\" (UID: \"7b7537ef-8ad8-4901-a2db-1881d2754684\") " pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.308765 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqq9\" (UniqueName: \"kubernetes.io/projected/f25c3eac-cf85-400e-be55-e093858a48be-kube-api-access-spqq9\") pod \"node-resolver-xksvx\" (UID: \"f25c3eac-cf85-400e-be55-e093858a48be\") " pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.313000 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.316997 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b68x4" Dec 12 04:33:52 crc kubenswrapper[4796]: W1212 04:33:52.328218 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55b96fce_0e56_40cb_ab90_873a8421260b.slice/crio-a1b83204e97ad394f091bfb05c2e2bb524f1f5ae00ab27c7f555c4e11d581b99 WatchSource:0}: Error finding container a1b83204e97ad394f091bfb05c2e2bb524f1f5ae00ab27c7f555c4e11d581b99: Status 404 returned error can't find the container with id a1b83204e97ad394f091bfb05c2e2bb524f1f5ae00ab27c7f555c4e11d581b99 Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.332107 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5zck7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.335784 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.337883 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:33:52 crc kubenswrapper[4796]: W1212 04:33:52.342682 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b7537ef_8ad8_4901_a2db_1881d2754684.slice/crio-93073ee77ba12a6ac2f3b019e6e29f26b24cbc1973c547a180de6dc129ce0a3e WatchSource:0}: Error finding container 93073ee77ba12a6ac2f3b019e6e29f26b24cbc1973c547a180de6dc129ce0a3e: Status 404 returned error can't find the container with id 93073ee77ba12a6ac2f3b019e6e29f26b24cbc1973c547a180de6dc129ce0a3e Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.344526 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.368858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.388378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.388410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.388418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.388432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.388443 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.406980 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.410458 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:52 crc kubenswrapper[4796]: E1212 04:33:52.410614 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.410645 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:52 crc kubenswrapper[4796]: E1212 04:33:52.410725 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.438714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.452414 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-12 04:28:51 +0000 UTC, rotation deadline is 2026-08-31 19:22:26.902770331 +0000 UTC Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.452473 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6302h48m34.450299845s for next certificate rotation Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.458163 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.499015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.499051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.499060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.499074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.499083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.593740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.593778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"fedc6cca85bda73ddd8e62327cffb58c342ab852d655927a0ff8fff176664394"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.595931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerStarted","Data":"93073ee77ba12a6ac2f3b019e6e29f26b24cbc1973c547a180de6dc129ce0a3e"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.597253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerStarted","Data":"3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.597311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerStarted","Data":"a1b83204e97ad394f091bfb05c2e2bb524f1f5ae00ab27c7f555c4e11d581b99"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.598430 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" exitCode=0 Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.598917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.598943 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"e4251de98e807675579ba5b1b3f2a10a55f2438c66a1cf3c461ac0b9468fb09c"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.600258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.600302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.600311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.600324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.600335 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.607256 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xksvx" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.613652 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: W1212 04:33:52.627142 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25c3eac_cf85_400e_be55_e093858a48be.slice/crio-7d470753126cc6830a087ddf2a190ccabc89a0ccf1499e1d29d9ceb29626c8df WatchSource:0}: Error finding container 7d470753126cc6830a087ddf2a190ccabc89a0ccf1499e1d29d9ceb29626c8df: Status 404 returned error can't find the container with id 7d470753126cc6830a087ddf2a190ccabc89a0ccf1499e1d29d9ceb29626c8df Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.627731 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.645874 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.663918 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.677596 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.699786 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.702353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.702404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.702414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.702427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.702437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.721095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.744120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.761345 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.774768 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.786507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.802031 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.806132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.806170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.806178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.806194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.806209 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.821653 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.832084 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.842570 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.853699 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.869907 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.885455 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.908436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.908464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.908475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.908491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.908502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:52Z","lastTransitionTime":"2025-12-12T04:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.913790 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.929051 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.941787 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.960233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:52 crc kubenswrapper[4796]: I1212 04:33:52.997866 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.011092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.011122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.011130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.011143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.011155 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.040802 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.060160 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.085687 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z 
is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.099544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.114049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.115008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.115049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.115059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.115074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.115083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.217309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.217335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.217345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.217357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.217366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.319733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.320034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.320044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.320058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.320068 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.413110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:53 crc kubenswrapper[4796]: E1212 04:33:53.413212 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.421269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.421317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.421325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.421336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.421344 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.523148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.523172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.523179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.523191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.523200 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605580 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.605597 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.612510 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.613700 4796 
generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c" exitCode=0 Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.613777 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.617126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xksvx" event={"ID":"f25c3eac-cf85-400e-be55-e093858a48be","Type":"ContainerStarted","Data":"b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.617168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xksvx" event={"ID":"f25c3eac-cf85-400e-be55-e093858a48be","Type":"ContainerStarted","Data":"7d470753126cc6830a087ddf2a190ccabc89a0ccf1499e1d29d9ceb29626c8df"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627365 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.627577 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.638836 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.650745 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.663580 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 
04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.681795 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.699829 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.711080 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.722548 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.730976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.731009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.731017 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.731031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.731041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.734466 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.758346 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.767538 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.784208 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.795103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.806940 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.817949 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.830456 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.832594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.832628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.832639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.832654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.832663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.842767 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.852690 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdt
r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.863551 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.875560 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.887358 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.900137 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.911108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934356 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.934930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:53Z","lastTransitionTime":"2025-12-12T04:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.946734 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.967298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.983702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:53 crc kubenswrapper[4796]: I1212 04:33:53.996012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:53Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.041058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.041102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.041114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.041130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.041143 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.143454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.143517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.143535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.143559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.143576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.246992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.247039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.247050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.247069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.247082 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.350558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.350621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.350636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.350663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.350680 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.410309 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.410377 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:54 crc kubenswrapper[4796]: E1212 04:33:54.410473 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:54 crc kubenswrapper[4796]: E1212 04:33:54.410744 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.452752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.452798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.452811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.452828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.452843 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.556055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.556102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.556117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.556137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.556152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.622050 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41" exitCode=0 Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.622161 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.646055 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.660066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.660146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.660172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.660204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.660230 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.668671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.673892 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bs8p8"] Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.674262 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.676603 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.676698 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.676741 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.677640 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.685435 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.698583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.712512 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.723817 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.742299 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.762475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.763413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.763452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.763464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.763480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.763492 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.776805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.781910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d20be8d8-badc-477e-92e1-6c4be36a08fa-host\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.782145 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d20be8d8-badc-477e-92e1-6c4be36a08fa-serviceca\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.782164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9nc\" (UniqueName: \"kubernetes.io/projected/d20be8d8-badc-477e-92e1-6c4be36a08fa-kube-api-access-pw9nc\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.789166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.802396 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.815999 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.827336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.838870 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.853015 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.864943 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.865612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.865647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.865659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.865675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.865685 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.875687 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.883443 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d20be8d8-badc-477e-92e1-6c4be36a08fa-host\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.883476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d20be8d8-badc-477e-92e1-6c4be36a08fa-serviceca\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.883494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9nc\" (UniqueName: \"kubernetes.io/projected/d20be8d8-badc-477e-92e1-6c4be36a08fa-kube-api-access-pw9nc\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.883565 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d20be8d8-badc-477e-92e1-6c4be36a08fa-host\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.884987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d20be8d8-badc-477e-92e1-6c4be36a08fa-serviceca\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.885738 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.898785 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.905300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9nc\" (UniqueName: \"kubernetes.io/projected/d20be8d8-badc-477e-92e1-6c4be36a08fa-kube-api-access-pw9nc\") pod \"node-ca-bs8p8\" (UID: \"d20be8d8-badc-477e-92e1-6c4be36a08fa\") " pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.939877 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.958870 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.970500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.970542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.970552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.970566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.970580 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:54Z","lastTransitionTime":"2025-12-12T04:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.979219 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.987789 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bs8p8" Dec 12 04:33:54 crc kubenswrapper[4796]: I1212 04:33:54.990741 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.000583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:54Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.015422 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.027386 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.043889 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.057036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.073176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.073208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.073217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.073231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.073240 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.074412 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z 
is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.085219 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.085313 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:34:03.085261524 +0000 UTC m=+33.961278671 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.175555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.175595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.175603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.175619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.175666 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.186832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.186905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.186955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.186993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187175 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187212 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187214 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187233 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187295 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:03.18726427 +0000 UTC m=+34.063281417 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187297 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187323 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187333 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187349 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:03.187327022 +0000 UTC m=+34.063344209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187393 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:03.187377154 +0000 UTC m=+34.063394301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187460 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.187498 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:03.187488327 +0000 UTC m=+34.063505564 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.278611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.278644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.278653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.278665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.278675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.381575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.381625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.381639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.381658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.381672 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.411434 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:55 crc kubenswrapper[4796]: E1212 04:33:55.411594 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.484647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.484683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.484697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.484713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.484723 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.587547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.587600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.587619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.587645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.587666 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.627908 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bs8p8" event={"ID":"d20be8d8-badc-477e-92e1-6c4be36a08fa","Type":"ContainerStarted","Data":"7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.627988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bs8p8" event={"ID":"d20be8d8-badc-477e-92e1-6c4be36a08fa","Type":"ContainerStarted","Data":"25c7ecf1690f047323e400ec84518ae37a6bcae7fb2e23537622f258cc56c963"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.630948 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266" exitCode=0 Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.631009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.653388 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.670102 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffa
c0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.685127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.691663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.691709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.691731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.691759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.691783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.703977 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.722789 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.736644 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.757016 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.766696 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.789216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.797961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.798013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.798026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.798054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.798066 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.813876 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.831242 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.844174 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.857191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.870742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.880292 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.897313 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.902561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.902598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.902608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.902623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.902634 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:55Z","lastTransitionTime":"2025-12-12T04:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.908864 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.920859 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.931229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.943861 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.954033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.966804 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.976925 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.986752 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:55 crc kubenswrapper[4796]: I1212 04:33:55.997476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2
b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:55Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.004677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.004714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.004724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.004738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.004748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.008247 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.018632 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.035159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.049968 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.097735 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z 
is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.107404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.107441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.107452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.107469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.107481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.209685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.209728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.209740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.209756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.209769 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.312665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.312748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.312769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.312794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.312833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.411210 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.411307 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:56 crc kubenswrapper[4796]: E1212 04:33:56.411417 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:56 crc kubenswrapper[4796]: E1212 04:33:56.411570 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.415320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.415360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.415376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.415394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.415408 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.517603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.517719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.517730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.517746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.517758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.620311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.620353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.620361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.620377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.620386 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.638013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.640787 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab" exitCode=0 Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.640817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.658489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.675256 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.688356 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.701529 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.714961 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.722960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.722985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.722992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.723005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.723014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.728656 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.745726 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.756916 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.783767 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z 
is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.797038 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.808527 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824419 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.824829 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.836463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.851550 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.870553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:56Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.928437 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.928507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.928516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.928531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:56 crc kubenswrapper[4796]: I1212 04:33:56.928539 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:56Z","lastTransitionTime":"2025-12-12T04:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.030433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.030463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.030472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.030484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.030492 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.133648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.133688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.133699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.133715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.133728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.236261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.236312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.236323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.236354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.236363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.339371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.339437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.339454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.339478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.339495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.410436 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:57 crc kubenswrapper[4796]: E1212 04:33:57.410596 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.442378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.442421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.442435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.442452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.442467 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.545024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.545059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.545067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.545084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.545095 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.646886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.647420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.647513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.647620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.647700 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.649415 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986" exitCode=0 Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.649492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.675159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.693188 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.719589 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.735896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.754306 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b46
05f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.761019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.761057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.761069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.761086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.761100 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.767298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.783108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.805412 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.823025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.834265 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.850890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fb
a98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.863499 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.864434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.864453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.864460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.864474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.864482 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.883822 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.898141 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.913101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:57Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.967025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.967054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.967063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.967075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:57 crc kubenswrapper[4796]: I1212 04:33:57.967085 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:57Z","lastTransitionTime":"2025-12-12T04:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.068890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.068928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.068939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.068956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.068968 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.172246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.172320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.172334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.172351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.172364 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.275417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.275479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.275498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.275528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.275547 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.377617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.377655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.377666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.377683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.377694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.411146 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.411146 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:33:58 crc kubenswrapper[4796]: E1212 04:33:58.411338 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:33:58 crc kubenswrapper[4796]: E1212 04:33:58.411424 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.481630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.481676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.481688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.481704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.481716 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.584802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.584868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.584914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.584951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.584971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.659546 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b7537ef-8ad8-4901-a2db-1881d2754684" containerID="e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f" exitCode=0 Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.659641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerDied","Data":"e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.667983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.668575 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.668848 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.710725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.710790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.710802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.710819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.710832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.711039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.726076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.726361 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.740587 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.740666 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.754423 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.771865 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.788411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.803322 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b46
05f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.812425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.812460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.812473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.812488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.812499 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.816229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.829677 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.845475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.857119 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.866617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.885808 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fb
a98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.897305 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.915084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.915132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.915144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.915163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.915179 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:58Z","lastTransitionTime":"2025-12-12T04:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.920071 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.939543 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-c
ni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.949831 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.959509 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.973068 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.984257 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:58 crc kubenswrapper[4796]: I1212 04:33:58.994750 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:58Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.004228 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.016052 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.017722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.017752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.017763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.017779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.017790 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.028840 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.042353 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.054632 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.062980 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.072269 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.090825 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.120509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.120546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.120558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.120574 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.120587 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.148894 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.223170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.223213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.223224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.223240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.223252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.262124 4796 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.325202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.325475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.325561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.325641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.325717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.411134 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:33:59 crc kubenswrapper[4796]: E1212 04:33:59.411233 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.427714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.427748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.427757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.427770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.427781 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.437106 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.450954 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.468966 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.483500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.499956 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.517494 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.530299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.530368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.530385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.530406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.530424 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.537370 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.555558 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.571429 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.587554 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.598441 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.614088 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.625107 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.632400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.632427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.632435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.632447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.632455 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.641267 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.661071 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.674377 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.675352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" event={"ID":"7b7537ef-8ad8-4901-a2db-1881d2754684","Type":"ContainerStarted","Data":"6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.691428 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.710281 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.724412 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.734262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.734319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.734332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.734351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.734363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.744355 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.766524 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.782633 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.795122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.810062 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.827193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.837181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.837537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.837548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.837574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.837583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.844970 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.855891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.870193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.880042 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.896889 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ov
nkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\
\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 
2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.912814 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:33:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.939597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.939633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.939644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.939659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:33:59 crc kubenswrapper[4796]: I1212 04:33:59.939667 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:33:59Z","lastTransitionTime":"2025-12-12T04:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.041983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.042028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.042040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.042057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.042068 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.144797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.144847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.144883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.144912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.144923 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.247402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.247436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.247445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.247458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.247468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.350561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.350615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.350624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.350639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.350648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.410623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.410634 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.410800 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.410943 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.453957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.454011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.454028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.454052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.454072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.556954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.557029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.557045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.557065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.557147 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.660401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.660435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.660447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.660463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.660475 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.679403 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.764636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.764682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.764708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.764737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.764762 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.777892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.777946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.777961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.777981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.777998 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.792887 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:00Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.795995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.796030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
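The status patch above is rejected not because of its payload but because the admission webhook node.network-node-identity.openshift.io, served on 127.0.0.1:9743, presents a certificate that expired on 2025-08-24, well before the current time 2025-12-12. A hedged way to confirm the validity window independently of the kubelet is sketched below in Python; only the host, port and expected expiry date come from the log, and the third-party cryptography package is assumed to be available.

    # Illustrative sketch: fetch the webhook's serving certificate and print its validity window.
    import ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743   # webhook endpoint named in the log

    # get_server_certificate() does not validate the chain, so an expired cert can still be fetched.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)   # per the log, expected to read 2025-08-24T17:21:41Z

If the window really has lapsed, every call routed through this webhook fails the same way until the certificate is renewed; a wrong host clock would instead trip the "is not yet valid" half of the same x509 error, which is not what the log reports here.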
event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.796043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.796059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.796069 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.811218 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:00Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.814880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.814913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.814924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.814940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.814950 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.826665 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:00Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.830414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.830452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
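Locally the kubelet keeps recording the same five events, while every attempt to patch them upstream is rejected by the webhook and retried. To see what actually reached the API server, the node's conditions can be read back directly; the sketch below uses the official kubernetes Python client and is an assumption about how one might inspect this, with only the node name crc and the condition types taken from the log.

    # Illustrative read-back of node conditions as the API server currently stores them.
    # Requires the `kubernetes` Python client and a working kubeconfig for this cluster.
    from kubernetes import client, config

    config.load_kube_config()      # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()

    node = v1.read_node("crc")     # node name taken from the log
    for cond in node.status.conditions:
        # Expect MemoryPressure / DiskPressure / PIDPressure plus Ready=False, KubeletNotReady
        print(f"{cond.type:15} {cond.status:6} {cond.reason or '':25} {cond.last_transition_time}")

Because the patches in this section never get past the webhook, the lastHeartbeatTime seen this way may lag well behind the timestamps the kubelet is logging for the same conditions.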
event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.830463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.830480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.830492 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.844018 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:00Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.852144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.852180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.852198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.852216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.852465 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.864592 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:00Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:00 crc kubenswrapper[4796]: E1212 04:34:00.864746 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.870799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.870870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.870884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.870901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.870912 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.975823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.975866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.975877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.975894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:00 crc kubenswrapper[4796]: I1212 04:34:00.975906 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:00Z","lastTransitionTime":"2025-12-12T04:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.078161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.078231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.078242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.078259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.078273 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.180658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.180701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.180709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.180724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.180733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.283443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.283499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.283512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.283528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.283539 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.386683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.386721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.386730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.386743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.386752 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.411404 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:01 crc kubenswrapper[4796]: E1212 04:34:01.411548 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.489515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.489586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.489611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.489640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.489662 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.591708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.591761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.591778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.591800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.591817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.685648 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/0.log" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.690671 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937" exitCode=1 Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.690773 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.692032 4796 scope.go:117] "RemoveContainer" containerID="57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.697419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.697474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.697492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.697513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.697532 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.718904 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.742939 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.763991 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.783793 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.799778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.799819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.799829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.799844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.799854 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.804858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.816667 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.828545 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.841297 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.852227 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.864538 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.881896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.891906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.903688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.903718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.903727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.903740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.903753 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:01Z","lastTransitionTime":"2025-12-12T04:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.913716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.923583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:01 crc kubenswrapper[4796]: I1212 04:34:01.942223 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:01Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.009985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.010047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.010065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.010089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.010116 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.112584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.112651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.112665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.112682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.112694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.214399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.214438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.214450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.214466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.214478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.316852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.316883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.316892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.316904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.316913 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.411102 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.411102 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:02 crc kubenswrapper[4796]: E1212 04:34:02.411252 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:02 crc kubenswrapper[4796]: E1212 04:34:02.411360 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.418460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.418493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.418501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.418513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.418523 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.521123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.521149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.521157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.521181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.521191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.624188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.624230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.624245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.624264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.624281 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.696632 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/0.log" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.701111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.701261 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.716897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.726641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.726672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.726681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.726695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.726704 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.737221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62bea
b12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.770611 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.785614 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.796335 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.808294 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.818053 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.828880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.829100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.829184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.829270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.828921 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.829379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.839778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.849288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.860551 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.872714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.884844 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.898430 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.913045 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:02Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.931751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.931790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.931801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.931818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:02 crc kubenswrapper[4796]: I1212 04:34:02.931832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:02Z","lastTransitionTime":"2025-12-12T04:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.034847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.034889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.034903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.034920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.034930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.138177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.138252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.138276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.138345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.138412 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.169038 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.169359 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:34:19.169288354 +0000 UTC m=+50.045305531 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.246539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.246582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.246592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.246610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.246622 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.270340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.270638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.270486 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.270790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.270886 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-12 04:34:19.270863717 +0000 UTC m=+50.146880864 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.270939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271054 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271074 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271086 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271114 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:19.271105744 +0000 UTC m=+50.147122891 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.270769 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271139 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:19.271133665 +0000 UTC m=+50.147150812 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271446 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271523 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271629 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.271822 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:19.271801535 +0000 UTC m=+50.147818682 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.349572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.349623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.349633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.349649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.349659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.411141 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.411466 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.451953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.452051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.452074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.452102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.452124 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.555716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.555794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.555817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.555846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.555871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.657857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.657904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.657920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.657939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.657952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.706646 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/1.log" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.707373 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/0.log" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.719009 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222" exitCode=1 Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.719063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.719131 4796 scope.go:117] "RemoveContainer" containerID="57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.720643 4796 scope.go:117] "RemoveContainer" containerID="7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222" Dec 12 04:34:03 crc kubenswrapper[4796]: E1212 04:34:03.720984 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.743361 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.758574 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.764164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.764198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.764207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.764219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.764229 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.780749 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62bea
b12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f575
99aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.802509 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.819829 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.838786 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.851079 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.864942 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.866595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.866632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc 
kubenswrapper[4796]: I1212 04:34:03.866648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.866664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.866676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.882430 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.901141 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.914951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.934419 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.949893 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.962386 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.968744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.968799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.968817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.968839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.968859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:03Z","lastTransitionTime":"2025-12-12T04:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:03 crc kubenswrapper[4796]: I1212 04:34:03.976670 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:03Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.071390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.071446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.071455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.071470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.071481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.174105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.174147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.174158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.174175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.174191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.276821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.276865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.276874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.276895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.276906 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.379403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.379467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.379481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.379502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.379514 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.410695 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:04 crc kubenswrapper[4796]: E1212 04:34:04.410900 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.410730 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:04 crc kubenswrapper[4796]: E1212 04:34:04.411505 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.482579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.482628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.482640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.482657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.482693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.586043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.586094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.586103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.586121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.586132 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.689599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.690008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.690124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.690220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.690312 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.726051 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/1.log" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.793366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.793420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.793432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.793451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.793464 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.896050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.896113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.896126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.896149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.896162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.998395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.998444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.998459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.998480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:04 crc kubenswrapper[4796]: I1212 04:34:04.998494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:04Z","lastTransitionTime":"2025-12-12T04:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.095136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl"] Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.096201 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.098299 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.100582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.100623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.100634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.100651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.100665 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.101054 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.112163 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.124674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.136136 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.151353 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.163583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.176394 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.186053 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.193382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.193452 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.193519 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbc4\" (UniqueName: \"kubernetes.io/projected/bdee9958-1438-462c-b4d5-e5d7ba66483b-kube-api-access-mlbc4\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.193565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.203245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.203349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.203393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.203412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.203452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.205009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.214296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.231061 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.245264 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.256464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.277786 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.294502 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.294570 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 
12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.294660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbc4\" (UniqueName: \"kubernetes.io/projected/bdee9958-1438-462c-b4d5-e5d7ba66483b-kube-api-access-mlbc4\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.294811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.295696 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.296117 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdee9958-1438-462c-b4d5-e5d7ba66483b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.308718 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.309455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdee9958-1438-462c-b4d5-e5d7ba66483b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.316572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.316607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.316617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.316635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.316646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.333257 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbc4\" (UniqueName: \"kubernetes.io/projected/bdee9958-1438-462c-b4d5-e5d7ba66483b-kube-api-access-mlbc4\") pod \"ovnkube-control-plane-749d76644c-h7crl\" (UID: \"bdee9958-1438-462c-b4d5-e5d7ba66483b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.341359 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f
2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.351793 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.409963 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.411264 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:05 crc kubenswrapper[4796]: E1212 04:34:05.411372 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.418687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.418733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.418742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.418759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.418768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.521547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.521600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.521611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.521631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.521646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.624548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.624619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.624643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.624678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.624701 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.726777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.726828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.726845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.726868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.726884 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.732750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" event={"ID":"bdee9958-1438-462c-b4d5-e5d7ba66483b","Type":"ContainerStarted","Data":"a1e35a449bd135af48a63e41594597c361ff599bc80b475b7764234ddf5a09b1"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.829917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.829988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.830010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.830038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.830057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.905416 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.927139 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.933286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.933344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.933356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.933371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.933382 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:05Z","lastTransitionTime":"2025-12-12T04:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.948356 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.960813 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.972509 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.985891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:05 crc kubenswrapper[4796]: I1212 04:34:05.997406 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:05Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.009034 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.020557 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.029757 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.035502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.035531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.035539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.035553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.035562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.040983 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.052638 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.065883 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.079374 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.104677 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.117403 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
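
Every "Failed to update status for pod" entry in this stretch fails for the same reason: before the API server will accept the kubelet's status patch, it must call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, and that TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-12. The webhook container above mounts its certificate volume at /etc/webhook-cert/; below is a minimal Go sketch of the same validity check the TLS stack performs (the tls.crt filename inside that mount is an assumption, not something shown in the log).

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Path assumption: the webhook container mounts the "webhook-cert"
    	// volume at /etc/webhook-cert/ (see the pod status above); tls.crt is
    	// the conventional secret key name and is not confirmed by the log.
    	raw, err := os.ReadFile("/etc/webhook-cert/tls.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	now := time.Now().UTC()
    	fmt.Printf("notBefore=%s notAfter=%s\n", cert.NotBefore.UTC(), cert.NotAfter.UTC())
    	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
    		// Same condition the kubelet log reports as
    		// "certificate has expired or is not yet valid".
    		fmt.Println("certificate is outside its validity window")
    	}
    }

Until that serving certificate is rotated (or the clock discrepancy resolved), every pod create/update that matches the webhook's rules will keep failing with the same "Internal error occurred: failed calling webhook" message.
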
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.134849 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.136965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.136987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.136996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.137009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.137018 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.188476 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ftpgk"] Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.188860 4796 util.go:30] "No sandbox for pod can be found. 
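
The "Node became not ready" condition recorded here is a downstream symptom rather than a separate fault: NetworkReady stays false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and that file normally only appears once the default network plugin is up, which cannot happen while ovnkube-controller keeps exiting with code 1 (restartCount 1, both its last and current state terminated, per the entries above). A rough, stdlib-only approximation of the readiness check follows; the .conf/.conflist/.json patterns are an assumption about what CNI config loading accepts, not CRI-O's exact logic.

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    func main() {
    	// Directory taken from the log message.
    	confDir := "/etc/kubernetes/cni/net.d"
    	var files []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, err := filepath.Glob(filepath.Join(confDir, pat))
    		if err != nil {
    			continue
    		}
    		files = append(files, m...)
    	}
    	if len(files) == 0 {
    		// Corresponds to NetworkReady=false / NetworkPluginNotReady above:
    		// the default network has not (re)written its CNI config yet.
    		fmt.Println("no CNI configuration file in", confDir)
    		return
    	}
    	fmt.Println("CNI config present:", files)
    }

Pod sandbox creation is gated on the same condition, which is why the entry that follows skips syncing network-metrics-daemon-ftpgk with "network is not ready".
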
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.188915 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.200905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.212746 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
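
The patch bodies quoted in these errors are ordinary strategic-merge patches against .status; they are only hard to read because klog quotes the error string and the error string in turn quotes the patch. A small stdlib helper recovers readable JSON. The sample below is hand-copied and truncated from the node-resolver entry above and reduced to a single layer of quoting for brevity (the journal output carries two).

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"log"
    	"strconv"
    )

    func main() {
    	// Truncated sample of one status patch, with one layer of Go quoting.
    	quoted := `"{\"metadata\":{\"uid\":\"f25c3eac-cf85-400e-be55-e093858a48be\"},\"status\":{\"phase\":\"Running\"}}"`

    	unquoted, err := strconv.Unquote(quoted)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var pretty bytes.Buffer
    	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(pretty.String()) // readable strategic-merge patch body
    }

Nothing in the patches themselves is malformed; they are rejected solely because the admission webhook call fails.
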
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.221285 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc 
kubenswrapper[4796]: I1212 04:34:06.231843 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.241332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.241373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.241384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.241403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.241414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.246381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.258851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.270668 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
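
To confirm that the certificate served on the wire is the expired one, the endpoint named in these errors can be probed directly from the node. The sketch below dials 127.0.0.1:9743 and reads the presented leaf certificate; verification is skipped deliberately so NotAfter can be inspected even though it is already past.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Endpoint taken from the webhook error text in the log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	certs := conn.ConnectionState().PeerCertificates
    	if len(certs) == 0 {
    		log.Fatal("no peer certificates presented")
    	}
    	leaf := certs[0]
    	fmt.Printf("subject=%s notAfter=%s expired=%v\n",
    		leaf.Subject, leaf.NotAfter.UTC().Format(time.RFC3339),
    		time.Now().After(leaf.NotAfter))
    }

Run on this node, it should report notAfter=2025-08-24T17:21:41Z and expired=true, matching the error text repeated throughout these entries.
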
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.286896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.297360 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.304564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75tk\" (UniqueName: \"kubernetes.io/projected/a81191a1-393c-400c-9b7d-6748c4a8fb36-kube-api-access-t75tk\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.304613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.309989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.319819 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.329178 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.340959 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.343863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.343886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.343895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.343908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.343917 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.355842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.374381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 
04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.391764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.400559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.406350 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75tk\" (UniqueName: \"kubernetes.io/projected/a81191a1-393c-400c-9b7d-6748c4a8fb36-kube-api-access-t75tk\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.406428 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.406615 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.406694 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:06.90667547 +0000 UTC m=+37.782692617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.410260 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.410411 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.410508 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.410637 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.422649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75tk\" (UniqueName: \"kubernetes.io/projected/a81191a1-393c-400c-9b7d-6748c4a8fb36-kube-api-access-t75tk\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.445796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.445834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.445845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.445862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.445873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.548576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.548619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.548628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.548642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.548651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.651480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.651533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.651550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.651569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.651588 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.738750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" event={"ID":"bdee9958-1438-462c-b4d5-e5d7ba66483b","Type":"ContainerStarted","Data":"bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.738815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" event={"ID":"bdee9958-1438-462c-b4d5-e5d7ba66483b","Type":"ContainerStarted","Data":"acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.754593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.754655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.754678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.754706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.754724 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.756608 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.776936 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.790184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.807136 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.821777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.834113 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.846038 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.855712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.857219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.857257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.857275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.857346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.857364 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.867492 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc
2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.881857 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.893903 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.910958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.911079 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:06 crc kubenswrapper[4796]: E1212 04:34:06.911124 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:07.911110239 +0000 UTC m=+38.787127386 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.911094 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.928175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.938187 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.949338 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276
e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.958868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.958911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.958923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.958939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.958950 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:06Z","lastTransitionTime":"2025-12-12T04:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.961220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:06 crc kubenswrapper[4796]: I1212 04:34:06.969525 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:06Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.065002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.065031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.065040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.065062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.065071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.167331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.167367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.167380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.167397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.167408 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.269619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.269668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.269682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.269713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.269725 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.372791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.372844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.372861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.372889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.372913 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.410717 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.410796 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:07 crc kubenswrapper[4796]: E1212 04:34:07.410908 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:07 crc kubenswrapper[4796]: E1212 04:34:07.411061 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.476001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.476042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.476053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.476070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.476083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.578603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.578681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.578691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.578713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.578723 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.681063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.681098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.681109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.681125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.681133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.784433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.784550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.784619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.784653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.784716 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.887842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.887904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.887920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.887943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.887960 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.920699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:07 crc kubenswrapper[4796]: E1212 04:34:07.920918 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:07 crc kubenswrapper[4796]: E1212 04:34:07.921021 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:09.920993759 +0000 UTC m=+40.797010916 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.990480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.990509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.990516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.990529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:07 crc kubenswrapper[4796]: I1212 04:34:07.990538 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:07Z","lastTransitionTime":"2025-12-12T04:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.093323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.093362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.093372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.093384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.093393 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.196558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.196610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.196623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.196640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.196657 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.299052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.299110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.299125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.299142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.299155 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.402076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.402159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.402178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.402210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.402231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.410602 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.410613 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:08 crc kubenswrapper[4796]: E1212 04:34:08.410762 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:08 crc kubenswrapper[4796]: E1212 04:34:08.410867 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.505740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.505794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.505811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.505834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.505851 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.613248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.613356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.613380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.613409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.613429 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.715743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.715796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.715807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.715824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.715836 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.819141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.819189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.819200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.819218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.819237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.922585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.922653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.922678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.922709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:08 crc kubenswrapper[4796]: I1212 04:34:08.922730 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:08Z","lastTransitionTime":"2025-12-12T04:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.025350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.025406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.025416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.025431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.025440 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.128196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.128263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.128321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.128351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.128371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.231535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.231581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.231599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.231623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.231641 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.340176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.340262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.340318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.340368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.340393 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.411220 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.411369 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:09 crc kubenswrapper[4796]: E1212 04:34:09.411501 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:09 crc kubenswrapper[4796]: E1212 04:34:09.411592 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.436429 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.443404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.443446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.443457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.443472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.443483 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.454085 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.472844 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.497097 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.511584 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.525510 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.538876 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.546851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.546914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.546938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.546969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.546995 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.554931 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.566350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.585787 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.604480 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.622980 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.638937 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.648934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.649159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.649240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.649368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.649488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.654765 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.666578 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.688777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57814bbef4507bb2558c0a086cb609fc32e9e0e14fc10b77dc352f9b2ff67937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:00Z\\\",\\\"message\\\":\\\"cy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1212 04:34:00.873169 5974 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873225 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873453 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.873483 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1212 04:34:00.874215 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1212 04:34:00.874229 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1212 04:34:00.874285 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1212 04:34:00.874316 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1212 04:34:00.874331 5974 factory.go:656] Stopping watch factory\\\\nI1212 04:34:00.874343 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1212 04:34:00.874338 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1212 04:34:00.874359 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1212 04:34:00.874362 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 
04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.717713 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:09Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.751502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.751807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.751934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.752085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.752214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.855268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.855358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.855378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.855403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.855420 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.942021 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:09 crc kubenswrapper[4796]: E1212 04:34:09.942262 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:09 crc kubenswrapper[4796]: E1212 04:34:09.942382 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:13.942360348 +0000 UTC m=+44.818377565 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.957619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.957690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.957711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.957735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:09 crc kubenswrapper[4796]: I1212 04:34:09.957747 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:09Z","lastTransitionTime":"2025-12-12T04:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.060904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.060964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.061000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.061029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.061051 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.164852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.164914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.164934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.164958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.164975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.268223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.268395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.268416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.268450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.268467 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.371769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.371830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.371847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.371870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.371891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.411056 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.411057 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:10 crc kubenswrapper[4796]: E1212 04:34:10.411421 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:10 crc kubenswrapper[4796]: E1212 04:34:10.411226 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.475870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.475948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.475968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.475998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.476023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.579394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.579457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.579477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.579504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.579522 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.682120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.682179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.682198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.682221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.682238 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.785694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.785752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.785774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.785802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.785824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.888706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.888769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.888793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.888819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.888840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.992124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.992190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.992214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.992244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:10 crc kubenswrapper[4796]: I1212 04:34:10.992266 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:10Z","lastTransitionTime":"2025-12-12T04:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.095641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.095747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.095775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.095804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.095828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.149606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.149668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.149690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.149718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.149743 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.173639 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:11Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.178544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.178623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.178650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.178681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.178752 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.201581 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:11Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.207464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.207525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.207549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.207581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.207605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.228156 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:11Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.233252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.233334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.233351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.233370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.233385 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.252280 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:11Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.257936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.257993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.258008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.258035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.258050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.280942 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:11Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.281123 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.284271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.284320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.284331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.284348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.284366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.387724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.387777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.387794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.387817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.387834 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.410400 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.410454 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.410600 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:11 crc kubenswrapper[4796]: E1212 04:34:11.410757 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.490633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.490714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.490728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.490746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.490758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.593487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.593521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.593535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.593552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.593566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.695970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.696064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.696084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.696102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.696117 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.798688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.798754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.798777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.798804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.798825 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.902374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.902423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.902441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.902463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:11 crc kubenswrapper[4796]: I1212 04:34:11.902479 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:11Z","lastTransitionTime":"2025-12-12T04:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.004928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.004996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.005016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.005041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.005063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.107730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.107798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.107822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.107850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.107869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.210472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.210533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.210546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.210571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.210589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.313226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.313264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.313274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.313308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.313318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.411012 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:12 crc kubenswrapper[4796]: E1212 04:34:12.411271 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.411528 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:12 crc kubenswrapper[4796]: E1212 04:34:12.411713 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.415840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.415890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.415909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.415932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.415949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.518039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.518097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.518114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.518136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.518152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.621130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.621166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.621177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.621192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.621203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.724178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.724212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.724223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.724238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.724249 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.827242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.827330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.827350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.827372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.827388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.930693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.930765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.930787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.930815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:12 crc kubenswrapper[4796]: I1212 04:34:12.930837 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:12Z","lastTransitionTime":"2025-12-12T04:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.033872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.033925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.033951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.033996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.034017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.137196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.137276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.137371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.137411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.137445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.239906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.239966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.239989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.240019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.240041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.343067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.343103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.343112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.343126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.343135 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.411425 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.411477 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:13 crc kubenswrapper[4796]: E1212 04:34:13.411615 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:13 crc kubenswrapper[4796]: E1212 04:34:13.411806 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.445477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.445509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.445517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.445553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.445567 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.548190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.548245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.548261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.548319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.548341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.651364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.651437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.651460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.651485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.651504 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.754246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.754339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.754358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.754381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.754399 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.857357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.857424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.857442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.857467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.857486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.960552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.960588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.960598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.960613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.960624 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:13Z","lastTransitionTime":"2025-12-12T04:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:13 crc kubenswrapper[4796]: I1212 04:34:13.988610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:13 crc kubenswrapper[4796]: E1212 04:34:13.988853 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:13 crc kubenswrapper[4796]: E1212 04:34:13.988996 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:21.988959173 +0000 UTC m=+52.864976350 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.063620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.063654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.063666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.063682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.063693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.167958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.168023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.168059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.168089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.168113 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.271571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.271634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.271657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.271687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.271724 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.374532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.374609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.374632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.374662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.374685 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.411171 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:14 crc kubenswrapper[4796]: E1212 04:34:14.411373 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.411456 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:14 crc kubenswrapper[4796]: E1212 04:34:14.411539 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.594988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.595061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.595082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.595113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.595134 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.698031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.698082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.698098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.698124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.698149 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.801118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.801174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.801191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.801215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.801232 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.904457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.904520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.904539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.904564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:14 crc kubenswrapper[4796]: I1212 04:34:14.904580 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:14Z","lastTransitionTime":"2025-12-12T04:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.006733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.006802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.006820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.006844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.006861 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.115480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.115531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.115542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.115558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.115570 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.218905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.218946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.218957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.218973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.218984 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.321268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.321349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.321365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.321388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.321404 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.410929 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.410964 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:15 crc kubenswrapper[4796]: E1212 04:34:15.411192 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:15 crc kubenswrapper[4796]: E1212 04:34:15.411323 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.423240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.423292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.423302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.423316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.423326 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.525329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.525366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.525377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.525394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.525406 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.628167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.628202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.628213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.628226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.628237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.731068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.731111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.731122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.731141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.731153 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.833352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.833400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.833416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.833437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.833453 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.939206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.939260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.939322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.939342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:15 crc kubenswrapper[4796]: I1212 04:34:15.939379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:15Z","lastTransitionTime":"2025-12-12T04:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.042123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.042158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.042167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.042182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.042191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.144984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.145019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.145026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.145038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.145046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.248712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.248790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.248809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.248833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.248852 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.353545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.353636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.353657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.353681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.353730 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.410782 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.410849 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:16 crc kubenswrapper[4796]: E1212 04:34:16.410969 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:16 crc kubenswrapper[4796]: E1212 04:34:16.411225 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.457444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.457512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.457525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.457562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.457576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.560882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.560962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.560974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.560992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.561003 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.663430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.663488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.663506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.663533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.663552 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.767082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.767145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.767163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.767188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.767205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.870049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.870111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.870128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.870152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.870169 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.973481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.973792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.973981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.974146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:16 crc kubenswrapper[4796]: I1212 04:34:16.974329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:16Z","lastTransitionTime":"2025-12-12T04:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.077807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.077860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.077876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.077899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.077916 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.181153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.181215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.181237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.181260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.181362 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.284305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.284613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.284743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.284886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.285007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.388866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.388916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.388933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.388957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.388974 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.411084 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.411119 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:17 crc kubenswrapper[4796]: E1212 04:34:17.411506 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:17 crc kubenswrapper[4796]: E1212 04:34:17.411274 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.491797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.491865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.492068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.492095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.492113 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.595805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.596133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.596258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.596414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.596557 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.699700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.699765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.699790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.699819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.699840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.803101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.803483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.803670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.803852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.804018 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.906961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.907449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.907652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.907844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:17 crc kubenswrapper[4796]: I1212 04:34:17.908051 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:17Z","lastTransitionTime":"2025-12-12T04:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.011202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.011269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.011326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.011358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.011381 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.115098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.115145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.115158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.115176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.115190 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.217910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.218272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.218511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.218718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.218948 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.322790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.322858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.322882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.322910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.322932 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.410384 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.410402 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:18 crc kubenswrapper[4796]: E1212 04:34:18.410578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:18 crc kubenswrapper[4796]: E1212 04:34:18.410711 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.411775 4796 scope.go:117] "RemoveContainer" containerID="7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.425652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.425718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.425740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.425770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.425790 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.437715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.461061 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.482537 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc 
kubenswrapper[4796]: I1212 04:34:18.506666 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.512516 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.525053 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.527974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.528016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.528031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.528051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.528065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.544124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.559127 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.572024 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.586352 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.598966 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.611578 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.626717 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.629718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.629746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.629753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.629768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.629777 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.639609 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.651855 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.667979 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.676814 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.696443 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.741673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.741713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.741728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.741748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.741763 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.783809 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/1.log" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.787084 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.787940 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.800409 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 
04:34:18.819635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.838243 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.844614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.845543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.845678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.845802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.845924 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.862951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.883488 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.898719 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.909241 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.931342 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.947592 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.948660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.948693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.948702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.948719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.948728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:18Z","lastTransitionTime":"2025-12-12T04:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.974640 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:18 crc kubenswrapper[4796]: I1212 04:34:18.988356 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:18.999900 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:18Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.011835 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc 
kubenswrapper[4796]: I1212 04:34:19.023923 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.047459 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.050490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.050522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.050531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.050547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.050557 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.067819 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.079795 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.152927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.152954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.152962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.152976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.152985 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.249180 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.249369 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:34:51.249342589 +0000 UTC m=+82.125359736 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.255560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.255717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.255811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.255912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.255990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.351215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351425 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.351386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351526 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:51.351496 +0000 UTC m=+82.227513187 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351544 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.351572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351617 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:51.351594743 +0000 UTC m=+82.227611920 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.351662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351732 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351763 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351787 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351848 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:51.35182713 +0000 UTC m=+82.227844317 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351848 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351889 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351910 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.351967 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:51.351948134 +0000 UTC m=+82.227965311 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.359160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.359446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.359654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.359812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.359957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.410358 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.410561 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.410836 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.411177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.436877 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f
89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.462216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.462248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.462258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.462292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.462304 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.470338 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.486511 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.506833 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.523173 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.532960 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc 
kubenswrapper[4796]: I1212 04:34:19.543801 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.557182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.564724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.564907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.565009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.565101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.565182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.569006 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.584101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.594312 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.604527 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.614198 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.621793 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.630580 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.640599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.651611 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.667645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.667703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.667726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.667756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.667779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.735631 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.748578 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.762381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.770429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.770459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.770471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.770487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.770498 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.783595 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.791378 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/2.log" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.791938 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/1.log" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.795071 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" exitCode=1 Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.796000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.796265 4796 scope.go:117] "RemoveContainer" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.796428 4796 scope.go:117] "RemoveContainer" containerID="7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222" Dec 12 04:34:19 crc kubenswrapper[4796]: E1212 04:34:19.796478 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" 
podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.798191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.828574 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.844174 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.856880 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc 
kubenswrapper[4796]: I1212 04:34:19.870039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.873074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.873102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.873113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.873129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.873140 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.886132 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.902957 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.923341 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.942120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.954345 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.968383 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.975583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.975618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.975626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.975640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.975651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:19Z","lastTransitionTime":"2025-12-12T04:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.978908 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.988803 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:19 crc kubenswrapper[4796]: I1212 04:34:19.999400 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.011384 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.026496 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.041063 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.052875 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc 
kubenswrapper[4796]: I1212 04:34:20.066453 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.078189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.078252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.078303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.078336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.078359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.084395 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.108250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.127414 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.146335 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.159604 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.178441 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.180356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.180400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.180418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.180441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.180456 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.191365 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.209705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.228452 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.246880 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.276861 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.283152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.283197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.283211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.283232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.283245 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.289956 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.309758 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.328262 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f
89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f81b348ee8cf24567941faf7dbd2df531b62beab12acfe329804ceb5e074222\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:02Z\\\",\\\"message\\\":\\\"/kube-controller-manager-crc in node crc\\\\nI1212 04:34:02.491412 6122 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1212 04:34:02.491416 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491419 6122 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1212 04:34:02.491422 6122 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1212 04:34:02.491422 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1212 04:34:02.491401 6122 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1212 04:34:02.491409 6122 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-mxh7m\\\\nF1212 04:34:02.491434 6122 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization,\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerS
tatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.386170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.386222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.386238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.386261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.386304 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.410809 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.410826 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:20 crc kubenswrapper[4796]: E1212 04:34:20.411358 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:20 crc kubenswrapper[4796]: E1212 04:34:20.411742 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.489622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.490032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.490183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.490377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.490544 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.594455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.594522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.594540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.594564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.594587 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.698171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.698628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.698783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.698973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.699149 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801137 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.801505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/2.log" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.805725 4796 scope.go:117] "RemoveContainer" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" Dec 12 04:34:20 crc kubenswrapper[4796]: E1212 04:34:20.805904 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.827915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.848180 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.874094 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.889949 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.903744 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.903903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.903983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.904068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.904186 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:20Z","lastTransitionTime":"2025-12-12T04:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.912101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.928670 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.947143 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.969464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:20 crc kubenswrapper[4796]: I1212 04:34:20.987859 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:20Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.007751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.008020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.008157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.008110 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.008643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.008758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.030333 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.060408 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.077086 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.091777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.112337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.112378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.112391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.112412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.112425 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.118264 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting 
failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.136182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c06385130
9b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.153060 4796 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.165138 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.215269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.215358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.215375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.215396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.215412 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.318147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.318207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.318228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.318251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.318264 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.410885 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.410953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.411210 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.411504 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.420785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.420935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.420956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.420984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.421010 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.524581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.524654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.524674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.524698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.524715 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.627511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.627605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.627625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.627652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.627687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.656119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.656209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.656229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.656261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.656309 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.675257 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.681963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.682018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.682032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.682052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.682065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.699647 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.704204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.704307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.704324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.704370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.704387 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.722445 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.727224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.727484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.727574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.727672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.727742 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.742956 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.746952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.747000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.747010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.747024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.747033 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.764602 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:21Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:21 crc kubenswrapper[4796]: E1212 04:34:21.764755 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.766564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.766619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.766636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.766659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.766676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.869736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.869806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.869835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.869867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.869891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.974182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.974582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.974672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.974759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:21 crc kubenswrapper[4796]: I1212 04:34:21.974845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:21Z","lastTransitionTime":"2025-12-12T04:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.077496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.078372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.078386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.078398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.078409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.083312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:22 crc kubenswrapper[4796]: E1212 04:34:22.083482 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:22 crc kubenswrapper[4796]: E1212 04:34:22.083543 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:34:38.083527339 +0000 UTC m=+68.959544486 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.180930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.180994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.181006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.181022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.181057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.284537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.284591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.284603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.284618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.284628 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.387670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.388112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.388376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.388610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.388808 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.410959 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:22 crc kubenswrapper[4796]: E1212 04:34:22.411128 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.411256 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:22 crc kubenswrapper[4796]: E1212 04:34:22.411614 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.491796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.491851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.491868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.491891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.491908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.594602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.594717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.594743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.594765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.594781 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.696910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.696938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.696947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.696959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.696967 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.800222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.800316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.800344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.800369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.800386 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.903572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.904087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.904404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.904667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:22 crc kubenswrapper[4796]: I1212 04:34:22.904900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:22Z","lastTransitionTime":"2025-12-12T04:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.008101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.008176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.008200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.008229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.008250 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.110669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.110709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.110721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.110737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.110748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.213934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.214365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.214386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.214411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.214433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.316491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.316765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.316978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.317227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.317484 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.411504 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:23 crc kubenswrapper[4796]: E1212 04:34:23.411718 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.411886 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:23 crc kubenswrapper[4796]: E1212 04:34:23.412168 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.419618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.419876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.419971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.420060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.420140 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.527625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.527706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.527730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.527761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.527782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.630201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.630231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.630238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.630251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.630259 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.733492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.733529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.733540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.733556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.733570 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.836702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.836786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.836815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.836848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.836874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.940250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.940474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.940575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.940613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:23 crc kubenswrapper[4796]: I1212 04:34:23.940639 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:23Z","lastTransitionTime":"2025-12-12T04:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.042743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.042789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.042801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.042818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.042830 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.145453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.145493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.145525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.145543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.145553 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.249735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.249786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.249799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.249818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.249832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.351726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.351806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.351822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.351843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.351857 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.410607 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.410630 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:24 crc kubenswrapper[4796]: E1212 04:34:24.410772 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:24 crc kubenswrapper[4796]: E1212 04:34:24.410868 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.454883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.454965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.454989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.455024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.455049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.557734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.557789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.557828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.557850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.557864 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.660094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.660132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.660144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.660160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.660172 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.762822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.762864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.762873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.762890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.762900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.865809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.865872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.865895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.865929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.865952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.968486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.968644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.968665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.968687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:24 crc kubenswrapper[4796]: I1212 04:34:24.968703 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:24Z","lastTransitionTime":"2025-12-12T04:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.071713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.071762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.071779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.071803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.071821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.177175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.177217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.177229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.177244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.177255 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.280229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.280356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.280381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.280407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.280423 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.382720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.382771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.382783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.382802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.382814 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.410747 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.410871 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:25 crc kubenswrapper[4796]: E1212 04:34:25.410881 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:25 crc kubenswrapper[4796]: E1212 04:34:25.411104 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.486100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.486156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.486167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.486183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.486194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.588840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.588880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.588896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.588952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.588963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.692346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.692400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.692419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.692442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.692458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.795166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.795207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.795218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.795259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.795292 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.898230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.898273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.898322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.898336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:25 crc kubenswrapper[4796]: I1212 04:34:25.898345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:25Z","lastTransitionTime":"2025-12-12T04:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.001889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.001966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.001982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.002002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.002016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.106246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.106361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.106382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.106404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.106427 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.208608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.208655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.208666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.208682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.208693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.311143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.311202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.311219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.311242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.311258 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.410693 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:26 crc kubenswrapper[4796]: E1212 04:34:26.410835 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.410715 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:26 crc kubenswrapper[4796]: E1212 04:34:26.411448 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.413560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.413592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.413603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.413616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.413627 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.517028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.517097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.517119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.517172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.517195 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.619941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.620015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.620033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.620056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.620074 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.723871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.724136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.724296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.724412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.724484 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.826612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.826660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.826678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.826695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.826706 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.929138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.929398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.929510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.929585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:26 crc kubenswrapper[4796]: I1212 04:34:26.929651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:26Z","lastTransitionTime":"2025-12-12T04:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.031486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.031720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.031812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.031899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.031991 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.133612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.133809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.133911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.133994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.134074 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.235698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.235753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.235773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.235793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.235807 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.338116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.339025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.339177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.339333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.339472 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.410889 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:27 crc kubenswrapper[4796]: E1212 04:34:27.411349 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.411380 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:27 crc kubenswrapper[4796]: E1212 04:34:27.411643 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.441562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.441625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.441642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.441668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.441696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.545004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.545461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.545598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.545764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.545888 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.649211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.649257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.649274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.649345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.649369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.751689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.751732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.751746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.751764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.751777 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.854527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.854596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.854618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.854646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.854667 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.958083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.958137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.958154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.958178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:27 crc kubenswrapper[4796]: I1212 04:34:27.958195 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:27Z","lastTransitionTime":"2025-12-12T04:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.060947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.061013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.061029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.061054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.061073 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.163155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.163219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.163240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.163269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.163324 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.265838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.266247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.266394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.266493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.266603 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.369195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.369231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.369242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.369258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.369269 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.411235 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.411394 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:28 crc kubenswrapper[4796]: E1212 04:34:28.411495 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:28 crc kubenswrapper[4796]: E1212 04:34:28.411578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.472093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.472454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.472549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.472643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.472764 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.576158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.576233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.576263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.576333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.576359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.679572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.679928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.680098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.680274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.680524 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.783193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.783641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.783854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.784605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.784834 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.887984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.888032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.888048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.888071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.888087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.990090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.990124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.990135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.990148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:28 crc kubenswrapper[4796]: I1212 04:34:28.990157 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:28Z","lastTransitionTime":"2025-12-12T04:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.093245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.093293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.093302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.093318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.093327 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.196781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.196817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.196826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.196840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.196851 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.299138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.299173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.299185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.299199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.299208 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.402737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.402796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.402820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.402850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.402870 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.410517 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:29 crc kubenswrapper[4796]: E1212 04:34:29.410630 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.410739 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:29 crc kubenswrapper[4796]: E1212 04:34:29.412945 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.432901 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.446268 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.467909 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.486063 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506409 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506477 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.506965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.522598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.542456 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.563103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.580474 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.602103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.608786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.608831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.608862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.608880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.608891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.615805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.638744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.650732 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.665863 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.683454 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.696383 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.709002 4796 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.710961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.710991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.711002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.711018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.711030 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.718548 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:29Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.814058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.814115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.814126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.814143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.814155 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.916783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.916814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.916821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.916835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:29 crc kubenswrapper[4796]: I1212 04:34:29.916844 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:29Z","lastTransitionTime":"2025-12-12T04:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.020489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.020521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.020530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.020542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.020550 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.124366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.124455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.124474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.124500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.124524 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.226932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.226983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.226995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.227014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.227026 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.329349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.329382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.329391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.329403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.329412 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.410626 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.410692 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:30 crc kubenswrapper[4796]: E1212 04:34:30.410767 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:30 crc kubenswrapper[4796]: E1212 04:34:30.410904 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.432521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.432581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.432596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.432617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.432634 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.534702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.534744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.534755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.534772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.534783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.637036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.637116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.637134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.637623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.637693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.740600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.740633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.740644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.740658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.740669 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.844152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.844232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.844244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.844260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.844318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.947238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.947325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.947350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.947373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:30 crc kubenswrapper[4796]: I1212 04:34:30.947389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:30Z","lastTransitionTime":"2025-12-12T04:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.050211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.050258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.050269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.050301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.050315 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.153955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.154082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.154104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.154128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.154151 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.260020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.260060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.260072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.260095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.260107 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.363102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.363146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.363162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.363184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.363200 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.415139 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:31 crc kubenswrapper[4796]: E1212 04:34:31.415343 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.415620 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:31 crc kubenswrapper[4796]: E1212 04:34:31.415738 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.466602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.466659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.466679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.466702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.466719 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.569228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.569332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.569353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.569375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.569392 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.671686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.671718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.671726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.671739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.671747 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.774736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.774812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.774834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.774861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.774880 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.877850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.877890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.877899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.877915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.877926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.980993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.981061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.981082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.981108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.981130 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.998350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.998383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.998394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.998410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:31 crc kubenswrapper[4796]: I1212 04:34:31.998423 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:31Z","lastTransitionTime":"2025-12-12T04:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.018199 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:32Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.023104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.023137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.023149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.023165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.023175 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.037921 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:32Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.042338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.042438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.042464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.042537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.042561 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.062529 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:32Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.065511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.065622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.065689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.065752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.065809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.078981 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:32Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.083905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.084159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.084414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.084602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.084776 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.097978 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:32Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.098096 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.100188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.100219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.100228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.100248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.100260 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.203830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.204169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.204352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.204540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.204670 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.307771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.308147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.308177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.308205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.308222 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.410233 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.410338 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.410473 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:32 crc kubenswrapper[4796]: E1212 04:34:32.410649 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.412048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.412093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.412110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.412131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.412150 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.515403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.516530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.516581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.516615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.516638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.619140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.619167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.619175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.619187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.619196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.721827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.721870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.721880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.722093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.722110 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.824309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.824340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.824348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.824360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.824369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.926409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.926469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.926485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.926506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:32 crc kubenswrapper[4796]: I1212 04:34:32.926520 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:32Z","lastTransitionTime":"2025-12-12T04:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.029555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.029607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.029625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.029648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.029668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.132646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.132693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.132712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.132737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.132780 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.234637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.234666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.234675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.234688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.234696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.338364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.338423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.338447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.338474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.338494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.411534 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.411566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:33 crc kubenswrapper[4796]: E1212 04:34:33.411702 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:33 crc kubenswrapper[4796]: E1212 04:34:33.411798 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.441344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.441378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.441389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.441407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.441419 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.543716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.544027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.544038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.544053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.544065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.645843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.645879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.645897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.645913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.645921 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.748629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.748655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.748663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.748674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.748683 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.851140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.851182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.851193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.851207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.851216 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.953575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.953608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.953616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.953629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:33 crc kubenswrapper[4796]: I1212 04:34:33.953638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:33Z","lastTransitionTime":"2025-12-12T04:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.056491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.056536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.056548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.056564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.056577 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.159250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.159311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.159322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.159336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.159345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.261606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.261640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.261648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.261660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.261672 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.364172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.364196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.364204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.364217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.364226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.411050 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.411158 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:34 crc kubenswrapper[4796]: E1212 04:34:34.411187 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:34 crc kubenswrapper[4796]: E1212 04:34:34.411433 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.468428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.468466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.468475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.468492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.468502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.570494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.570534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.570543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.570556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.570566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.673232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.673262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.673271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.673301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.673313 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.776262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.776307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.776316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.776329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.776339 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.878503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.878548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.878559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.878580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.878596 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.980438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.980478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.980486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.980502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:34 crc kubenswrapper[4796]: I1212 04:34:34.980510 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:34Z","lastTransitionTime":"2025-12-12T04:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.083299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.083340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.083349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.083363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.083372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.185928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.185976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.185990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.186007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.186019 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.288861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.288921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.288939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.288973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.289027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.391589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.391628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.391638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.391651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.391661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.410843 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.410902 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:35 crc kubenswrapper[4796]: E1212 04:34:35.411045 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:35 crc kubenswrapper[4796]: E1212 04:34:35.411583 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.411861 4796 scope.go:117] "RemoveContainer" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" Dec 12 04:34:35 crc kubenswrapper[4796]: E1212 04:34:35.412188 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.493899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.493935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.493943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.493956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.493966 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.596037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.596073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.596082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.596096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.596106 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.698400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.698453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.698465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.698487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.698498 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.800628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.800660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.800669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.800681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.800689 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.902310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.902343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.902352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.902364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:35 crc kubenswrapper[4796]: I1212 04:34:35.902372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:35Z","lastTransitionTime":"2025-12-12T04:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.004663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.004721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.004739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.004762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.004779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.107031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.107092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.107111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.107133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.107150 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.209778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.209826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.209844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.209865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.209880 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.312401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.312442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.312478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.312502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.312516 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.410626 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.410627 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:36 crc kubenswrapper[4796]: E1212 04:34:36.410796 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:36 crc kubenswrapper[4796]: E1212 04:34:36.410919 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.415713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.415744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.415752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.415765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.415774 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.518619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.518667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.518680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.518696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.518705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.621477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.621517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.621524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.621538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.621547 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.723956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.723986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.723996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.724008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.724017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.825885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.825922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.825931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.825948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.825958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.928413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.928449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.928457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.928472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:36 crc kubenswrapper[4796]: I1212 04:34:36.928481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:36Z","lastTransitionTime":"2025-12-12T04:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.030233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.030276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.030326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.030342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.030353 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.132345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.132384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.132392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.132409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.132420 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.234268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.234331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.234342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.234358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.234368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.336376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.336411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.336419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.336432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.336441 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.411348 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:37 crc kubenswrapper[4796]: E1212 04:34:37.411502 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.411734 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:37 crc kubenswrapper[4796]: E1212 04:34:37.412023 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.438881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.438942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.438963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.438989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.439007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.540851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.540884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.540893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.540907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.540916 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.644088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.644154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.644176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.644205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.644227 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.746751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.746784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.746793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.746807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.746817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.849778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.849814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.849825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.849841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.849853 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.951796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.951841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.951852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.951870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:37 crc kubenswrapper[4796]: I1212 04:34:37.951882 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:37Z","lastTransitionTime":"2025-12-12T04:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.054304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.054338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.054349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.054385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.054399 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.153039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:38 crc kubenswrapper[4796]: E1212 04:34:38.153194 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:38 crc kubenswrapper[4796]: E1212 04:34:38.153241 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:35:10.153227713 +0000 UTC m=+101.029244860 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.156357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.156380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.156389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.156403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.156413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.258679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.258714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.258722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.258739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.258748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.361190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.361240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.361252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.361269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.361616 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.410968 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:38 crc kubenswrapper[4796]: E1212 04:34:38.411214 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.411593 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:38 crc kubenswrapper[4796]: E1212 04:34:38.411764 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.464139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.464186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.464201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.464217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.464227 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.566854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.566893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.566901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.566916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.566925 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.669100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.669150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.669163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.669179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.669190 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.772294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.772332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.772344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.772360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.772371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.863162 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/0.log" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.863215 4796 generic.go:334] "Generic (PLEG): container finished" podID="55b96fce-0e56-40cb-ab90-873a8421260b" containerID="3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98" exitCode=1 Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.863245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerDied","Data":"3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.863632 4796 scope.go:117] "RemoveContainer" containerID="3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.878199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.878298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.878317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.878376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.878395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.881941 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.902015 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.910702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.921085 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.930647 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.943100 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 
2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.954820 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.964853 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.976216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.979863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.980016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.980099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.980170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.980245 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:38Z","lastTransitionTime":"2025-12-12T04:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.988101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:38 crc kubenswrapper[4796]: I1212 04:34:38.999080 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:38Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.009553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.022118 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.032692 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.048906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fb
a98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.058998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.067229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082298 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.082390 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f
89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.184576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.184617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.184626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.184639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.184647 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.286505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.286545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.286553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.286565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.286573 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.388716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.388768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.388777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.388791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.388817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.411133 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:39 crc kubenswrapper[4796]: E1212 04:34:39.411247 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.411144 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:39 crc kubenswrapper[4796]: E1212 04:34:39.411363 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.423507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.432944 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.445248 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.455963 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.466184 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.475240 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.486524 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.490753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.490786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.490794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.490808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.490819 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.501228 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.512150 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.527569 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.545833 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.566551 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.580597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.592992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.593240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.593327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 
04:34:39.593411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.593495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.595509 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.625906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.658834 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.689210 4796 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.695832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.695877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.695886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.695903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.695912 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.704447 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.797743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.797766 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.797774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.797802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.797811 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.867035 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/0.log" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.867075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerStarted","Data":"503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.885004 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.900538 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.900603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.901126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.901368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.901494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.901565 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:39Z","lastTransitionTime":"2025-12-12T04:34:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.912852 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.928671 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.941556 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.956588 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.968320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.980145 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b46
05f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:39 crc kubenswrapper[4796]: I1212 04:34:39.992977 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:39Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.003950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.003990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.004001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.004016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.004029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.007139 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.018970 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.030844 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.041187 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.050693 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.066994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.078892 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.088663 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.105597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics
-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc 
openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:40Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.106540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.106565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.106585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.106597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.106606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.208896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.208928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.208945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.208962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.208971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.311140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.311395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.311485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.311613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.311696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.410900 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.410976 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:40 crc kubenswrapper[4796]: E1212 04:34:40.411013 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:40 crc kubenswrapper[4796]: E1212 04:34:40.411127 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.414193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.414323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.414429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.414526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.414606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.516931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.517196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.517311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.517428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.517520 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.619533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.619569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.619579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.619595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.619606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.722769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.723024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.723047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.723077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.723098 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.825200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.825230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.825239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.825252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.825260 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.927535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.927569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.927580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.927595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:40 crc kubenswrapper[4796]: I1212 04:34:40.927608 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:40Z","lastTransitionTime":"2025-12-12T04:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.030158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.030208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.030225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.030248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.030265 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.134262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.134320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.134332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.134348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.134359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.236389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.236423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.236433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.236447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.236457 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.338854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.338901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.338915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.338936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.338988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.410636 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.410665 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:41 crc kubenswrapper[4796]: E1212 04:34:41.410799 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:41 crc kubenswrapper[4796]: E1212 04:34:41.410903 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.442448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.442484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.442517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.442533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.442544 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.550076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.550338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.550409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.550472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.550525 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.653367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.653591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.653694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.653763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.653829 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.756757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.756798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.756809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.756828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.756840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.858962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.859000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.859011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.859030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.859043 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.961090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.961121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.961129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.961141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:41 crc kubenswrapper[4796]: I1212 04:34:41.961150 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:41Z","lastTransitionTime":"2025-12-12T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.063140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.063187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.063202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.063223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.063239 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.166783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.166822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.166831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.166846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.166855 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.268790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.268848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.268859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.268874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.268885 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.338368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.338412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.338430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.338473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.338491 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.357648 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:42Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.363493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.363520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.363532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.363549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.363562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.382475 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:42Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.386687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.386768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.386779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.386791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.386800 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.399117 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:42Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.403136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.403214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.403234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.403263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.403311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.410658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.410805 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.410877 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.411020 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.418235 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:42Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.422222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.422309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.422334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.422366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.422389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.436473 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:42Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:42 crc kubenswrapper[4796]: E1212 04:34:42.436669 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.438708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.438743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.438757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.438774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.438786 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.542255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.542387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.542411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.542821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.543084 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.646231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.646319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.646339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.646362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.646378 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.748490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.748536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.748553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.748574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.748591 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.851864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.851909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.851929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.851952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.851968 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.954224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.954321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.954342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.954371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:42 crc kubenswrapper[4796]: I1212 04:34:42.954389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:42Z","lastTransitionTime":"2025-12-12T04:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.056477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.056506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.056520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.056535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.056546 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.158854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.158905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.158925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.158952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.158973 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.261561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.261627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.261646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.261671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.261687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.364141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.364198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.364216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.364251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.364269 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.415311 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:43 crc kubenswrapper[4796]: E1212 04:34:43.415474 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.415764 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:43 crc kubenswrapper[4796]: E1212 04:34:43.415892 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.467478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.467522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.467541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.467566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.467584 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.571443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.571490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.571506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.571529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.571545 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.674744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.674796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.674815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.674845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.674868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.779126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.779189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.779207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.779639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.779692 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.882464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.882497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.882508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.882523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.882535 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.984939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.984977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.984987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.985003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:43 crc kubenswrapper[4796]: I1212 04:34:43.985016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:43Z","lastTransitionTime":"2025-12-12T04:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.087044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.087096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.087118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.087136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.087148 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.189545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.189597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.189605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.189617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.189626 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.292831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.292881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.292894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.292916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.292930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.395918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.395982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.396003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.396026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.396043 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.411254 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.411390 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:44 crc kubenswrapper[4796]: E1212 04:34:44.411488 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:44 crc kubenswrapper[4796]: E1212 04:34:44.411603 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.497996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.498030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.498039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.498070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.498080 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.600325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.600353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.600361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.600374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.600383 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.702960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.702988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.702997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.703020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.703029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.806130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.806197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.806214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.806239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.806256 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.909711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.909769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.909786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.909810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:44 crc kubenswrapper[4796]: I1212 04:34:44.909826 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:44Z","lastTransitionTime":"2025-12-12T04:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.012860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.012978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.013342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.013658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.013963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.116949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.117015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.117041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.117070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.117092 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.219893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.219935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.219946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.219964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.219983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.323679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.323733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.323750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.323776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.323794 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.411380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.411449 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:45 crc kubenswrapper[4796]: E1212 04:34:45.411583 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:45 crc kubenswrapper[4796]: E1212 04:34:45.411731 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.426198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.426249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.426266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.426317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.426345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.529292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.529327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.529336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.529348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.529358 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.632414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.632477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.632495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.632519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.632537 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.735630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.735705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.735731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.735764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.735832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.837595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.837634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.837648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.837663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.837673 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.940464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.940518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.940538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.940567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:45 crc kubenswrapper[4796]: I1212 04:34:45.940589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:45Z","lastTransitionTime":"2025-12-12T04:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.043215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.043270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.043323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.043347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.043364 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.146347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.146417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.146442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.146471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.146495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.248792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.248836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.248846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.248861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.248874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.352357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.352419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.352431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.352449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.352461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.411072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.411667 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:46 crc kubenswrapper[4796]: E1212 04:34:46.411908 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:46 crc kubenswrapper[4796]: E1212 04:34:46.412366 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.455630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.455669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.455681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.455697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.455714 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.558105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.558156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.558174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.558198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.558214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.661402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.661453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.661464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.661481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.661495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.765055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.765116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.765136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.765160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.765179 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.867273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.867763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.868017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.868345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.868623 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.972548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.973199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.973356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.973470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:46 crc kubenswrapper[4796]: I1212 04:34:46.973587 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:46Z","lastTransitionTime":"2025-12-12T04:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.077346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.077447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.077466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.077491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.077537 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.180671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.180711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.180723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.180739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.180751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.282731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.282766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.282774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.282787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.282795 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.384666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.384696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.384704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.384716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.384726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.410419 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:47 crc kubenswrapper[4796]: E1212 04:34:47.410580 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.411042 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:47 crc kubenswrapper[4796]: E1212 04:34:47.411145 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.411791 4796 scope.go:117] "RemoveContainer" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.487798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.487868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.487883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.487898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.487909 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.591527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.591601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.591631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.591662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.591693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.694043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.694116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.694146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.694175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.694198 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.796700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.796743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.796755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.796772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.796784 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.894362 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/2.log" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.896540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.897396 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.900513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.900569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.900591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.900611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.900624 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:47Z","lastTransitionTime":"2025-12-12T04:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.912857 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.927717 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.942237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.958204 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.973222 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:47 crc kubenswrapper[4796]: I1212 04:34:47.988353 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:47Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003736 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.003500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.017599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.030450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.044576 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.059722 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.074778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.086588 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.120657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.120803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.120945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.121018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.121191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.138183 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc 
openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.166944 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.179199 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.191233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.205082 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.223268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.223325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.223334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.223350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.223360 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.325805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.325836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.325845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.325863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.325873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.411030 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:48 crc kubenswrapper[4796]: E1212 04:34:48.411152 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.411372 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:48 crc kubenswrapper[4796]: E1212 04:34:48.411585 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.428675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.428712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.428720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.428736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.428745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.531702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.531763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.531784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.531807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.531823 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.635095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.635134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.635145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.635163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.635176 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.738438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.738520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.738546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.738623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.738649 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.842183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.842242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.842264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.842341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.842367 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.903551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/3.log" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.904875 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/2.log" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.909581 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" exitCode=1 Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.909637 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.909685 4796 scope.go:117] "RemoveContainer" containerID="36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.910945 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:34:48 crc kubenswrapper[4796]: E1212 04:34:48.911264 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.947250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.947342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.947365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.947428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.947448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:48Z","lastTransitionTime":"2025-12-12T04:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.948389 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.966244 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:48 crc kubenswrapper[4796]: I1212 04:34:48.983448 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.001942 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:48Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.015554 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.037857 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.052128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.052193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.052211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.052261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.052318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.066332 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.087979 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.103482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.120150 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.139228 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.153776 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.155473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.155519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.155537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.155560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.155577 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.184810 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc 
openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:48Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1212 04:34:48.493316 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.220087 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.238237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.252163 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.257621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.257664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.257678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.257694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.257706 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.268214 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.282346 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.361113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.361156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.361166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.361182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.361193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.410942 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.411091 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:49 crc kubenswrapper[4796]: E1212 04:34:49.411354 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:49 crc kubenswrapper[4796]: E1212 04:34:49.411651 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.425007 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 
04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 
04:34:49.441689 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.453589 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.463374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.463433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.463449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.463473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.463490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.469482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.478796 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.494768 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.509818 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.521892 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.532516 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.543373 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 
04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.554839 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.565273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.565316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.565323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.565336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.565344 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.568892 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.582214 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.595779 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.615997 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3
ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.629910 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.640182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.662008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36aae43b8f92874142026882135b564c6e44fc8f89b5f8f8c8f27232e8f858b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"message\\\":\\\"address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1212 04:34:19.258820 6330 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1212 04:34:19.258899 6330 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:19Z is after 2025-08-24T17:21:41Z]\\\\nI1212 04:34:19.258896 6330 obj_retry.go:409] Going to retry *v1.Pod resource setup for 16 objects: [openshift-dns/node-resolver-xksvx openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-image-registry/node-ca-bs8p8 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:48Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1212 04:34:48.493316 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\
\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.669795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.669828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.669839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.669854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.669864 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.772726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.772800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.772822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.772851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.772873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.876385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.876442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.876459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.876482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.876498 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.916412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/3.log" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.922380 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:34:49 crc kubenswrapper[4796]: E1212 04:34:49.922669 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.939986 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.962637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.982038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.982107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.982124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.982150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.982171 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:49Z","lastTransitionTime":"2025-12-12T04:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:49 crc kubenswrapper[4796]: I1212 04:34:49.984661 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:49Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.008458 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.024838 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.042847 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.056721 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.074080 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.086988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.087021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.087035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.087054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.087070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.089707 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.102764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.116042 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.126352 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.135789 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.145153 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.155813 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.171909 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:48Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1212 04:34:48.493316 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190157 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.190339 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.201259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:50Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.292523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.292546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.292554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.292568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.292576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.397752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.397815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.397831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.397855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.397876 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.410380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:50 crc kubenswrapper[4796]: E1212 04:34:50.410548 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.410823 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:50 crc kubenswrapper[4796]: E1212 04:34:50.410899 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.500203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.500227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.500234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.500247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.500257 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.602070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.602095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.602103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.602115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.602123 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.705091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.705138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.705155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.705179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.705196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.808778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.808851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.808874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.808908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.808932 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.912441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.912502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.912525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.912553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:50 crc kubenswrapper[4796]: I1212 04:34:50.912571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:50Z","lastTransitionTime":"2025-12-12T04:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.015433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.015486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.015511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.015540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.015560 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.118830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.118887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.118903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.118925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.118946 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.221917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.221966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.221983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.222006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.222023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.288713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.288872 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.288843008 +0000 UTC m=+146.164860195 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.325562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.325614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.325636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.325668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.325690 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.389906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.389990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.390057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.390107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390198 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390343 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390368 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390452 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390410 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390771 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.390631 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.391027 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.391149 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.390359647 +0000 UTC m=+146.266376844 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.391204 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.391174312 +0000 UTC m=+146.267191499 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.391228 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.391216513 +0000 UTC m=+146.267233700 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.391568 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.391525563 +0000 UTC m=+146.267542780 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.411174 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.411185 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.411416 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:51 crc kubenswrapper[4796]: E1212 04:34:51.411598 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.428092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.428158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.428175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.428201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.428219 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.531650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.531705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.531718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.531735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.531745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.634999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.635083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.635107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.635137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.635162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.738319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.738404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.738429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.738462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.738488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.841758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.841827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.841844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.841875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.841892 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.944545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.944595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.944612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.944637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:51 crc kubenswrapper[4796]: I1212 04:34:51.944654 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:51Z","lastTransitionTime":"2025-12-12T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.048239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.048343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.048365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.048395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.048418 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.151317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.151395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.151419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.151450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.151473 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.253918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.253960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.253968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.253985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.253994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.358155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.358723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.358767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.358786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.358797 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.411272 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.411372 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.411555 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.411876 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.462115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.462203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.462236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.462266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.462500 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.565136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.565193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.565210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.565234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.565251 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.668314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.668371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.668391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.668414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.668433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.669725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.669765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.669781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.669802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.669819 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.687406 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.692545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.692623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.692643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.692670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.692687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.715686 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.726674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.726730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.726748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.726770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.726785 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.746853 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.751724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.751780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.751798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.751822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.751841 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.768635 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.773429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.773477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.773490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.773510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.773525 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.791739 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"29962b12-6b98-48df-a7ea-a35f82b5869e\\\",\\\"systemUUID\\\":\\\"6e5a9b12-f1e8-4509-b376-8d2a837dae47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:52Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:52 crc kubenswrapper[4796]: E1212 04:34:52.792185 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.794351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.794429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.794454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.794484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.794506 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.897569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.897632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.897655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.897681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:52 crc kubenswrapper[4796]: I1212 04:34:52.897698 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:52Z","lastTransitionTime":"2025-12-12T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.000693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.000756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.000779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.000807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.000828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.103964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.103999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.104011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.104028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.104041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.207791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.207846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.207863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.207887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.207904 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.310883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.310953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.311041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.311075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.311160 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.411329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:53 crc kubenswrapper[4796]: E1212 04:34:53.411579 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.411653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:53 crc kubenswrapper[4796]: E1212 04:34:53.411808 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.413833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.414059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.414238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.414482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.414634 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.516678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.516975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.517069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.517197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.517329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.619921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.619978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.619989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.620009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.620023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.723516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.723874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.724217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.724467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.724668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.827900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.828267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.828408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.828544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.828668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.932926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.933036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.933108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.933181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:53 crc kubenswrapper[4796]: I1212 04:34:53.933214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:53Z","lastTransitionTime":"2025-12-12T04:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.036953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.037010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.037027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.037051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.037068 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.140266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.140383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.140409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.140438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.140459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.243417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.243472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.243487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.243506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.243521 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.346160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.346219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.346233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.346252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.346266 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.411209 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.411255 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:54 crc kubenswrapper[4796]: E1212 04:34:54.411363 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:54 crc kubenswrapper[4796]: E1212 04:34:54.411571 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.449542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.449586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.449627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.449645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.449656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.552631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.552673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.552690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.552711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.552727 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.656010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.656057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.656090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.656109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.656121 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.759813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.759883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.759906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.759935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.759958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.863726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.864147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.864213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.864371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.864445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.966956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.967015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.967035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.967061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:54 crc kubenswrapper[4796]: I1212 04:34:54.967080 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:54Z","lastTransitionTime":"2025-12-12T04:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.070709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.071127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.071223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.071334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.071429 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.174510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.174565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.174583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.174607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.174624 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.278020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.278090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.278112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.278140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.278165 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.380753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.380779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.380787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.380800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.380809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.410262 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.410378 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:55 crc kubenswrapper[4796]: E1212 04:34:55.410460 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:55 crc kubenswrapper[4796]: E1212 04:34:55.410554 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.483377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.483448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.483472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.483499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.483517 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.586911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.586971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.586987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.587010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.587029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.689651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.689729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.689752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.689782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.689804 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.793213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.793352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.793643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.793988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.794060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.896873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.897824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.897879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.897915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:55 crc kubenswrapper[4796]: I1212 04:34:55.897942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:55Z","lastTransitionTime":"2025-12-12T04:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.000389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.000441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.000452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.000466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.000475 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.102926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.102965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.102976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.102993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.103004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.206482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.206549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.206571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.206601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.206620 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.309932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.309997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.310038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.310074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.310096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.411363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.411487 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:56 crc kubenswrapper[4796]: E1212 04:34:56.411568 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:56 crc kubenswrapper[4796]: E1212 04:34:56.411643 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.412829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.412874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.412892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.412913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.412931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.516061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.516114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.516130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.516153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.516169 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.619938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.619995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.620017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.620043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.620061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.722786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.722852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.722864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.722889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.722901 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.825929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.826151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.826167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.826186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.826199 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.932348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.932399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.932417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.932441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:56 crc kubenswrapper[4796]: I1212 04:34:56.932459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:56Z","lastTransitionTime":"2025-12-12T04:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.035789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.035869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.035904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.035934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.035954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.138870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.138929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.138946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.138968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.138983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.241960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.242022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.242046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.242073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.242095 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.357707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.357781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.357806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.357835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.357857 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.411020 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.411082 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:57 crc kubenswrapper[4796]: E1212 04:34:57.411262 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:57 crc kubenswrapper[4796]: E1212 04:34:57.411431 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.462146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.462214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.462238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.462316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.462342 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.565492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.565631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.565651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.565676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.565693 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.668591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.668672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.668695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.668723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.668745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.772274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.772363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.772381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.772408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.772428 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.875860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.875957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.875980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.876005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.876023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.978798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.978856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.978876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.978900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:57 crc kubenswrapper[4796]: I1212 04:34:57.978918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:57Z","lastTransitionTime":"2025-12-12T04:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.081932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.081985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.081996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.082015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.082028 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.185204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.185323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.185349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.185384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.185406 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.287760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.287807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.287823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.287845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.287962 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.390225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.390272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.390315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.390338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.390355 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.411155 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.411252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:34:58 crc kubenswrapper[4796]: E1212 04:34:58.411448 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:34:58 crc kubenswrapper[4796]: E1212 04:34:58.411600 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.493547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.493602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.493621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.493647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.493669 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.596609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.596679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.596702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.596730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.596751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.699922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.699995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.700017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.700047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.700070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.802990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.803042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.803064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.803091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.803109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.905717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.905808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.905828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.905858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:58 crc kubenswrapper[4796]: I1212 04:34:58.905877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:58Z","lastTransitionTime":"2025-12-12T04:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.008630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.008674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.008688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.008707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.008723 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.111831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.111904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.111931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.111964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.112003 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.215460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.215501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.215511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.215527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.215541 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.318564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.318630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.318651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.318676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.318694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.411199 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.411864 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:34:59 crc kubenswrapper[4796]: E1212 04:34:59.412493 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:34:59 crc kubenswrapper[4796]: E1212 04:34:59.412650 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.423516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.423584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.423609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.423640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.423661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.430356 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.435184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdee9958-1438-462c-b4d5-e5d7ba66483b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf4938509c7a9519a08f1283461fc39b2e0c1957460ef51eab82f3bafc08830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc4dc8bc921b2fce1ead16545a698b5577952af38eecb1fa40d6edc2a031517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlbc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h7crl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.454779 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c71b2390-9b85-49be-b7bf-8badbed175af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8adb6b7627df8c02487acc0966a10b1d004c434f2421cafba5afb711e8b0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d7e26b09e18cb5012e0bd4b0b51f7970c9057ec43d11aa4e6482be3c2315b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e234b1253f8441ac5a9bad32b4605f745ebd1bc1002a754288ea71252b389b4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.467520 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.484176 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9be342e718ccfd4532a3bfa975e07c85dd75e78d14e9b132b84945e8f4d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b13958381143906d480bf4f80b03a4055f52a02a6ba3e5039a71f9c9ee969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.506702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5919feb740fcb779c527b7af6a1b3a3e95b29ee0bd4b8742cddf9d36753fb73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.525991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.526052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.526068 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.526090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.526105 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.532173 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.545597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bs8p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d20be8d8-badc-477e-92e1-6c4be36a08fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba4b321362d5262e505cd0580187684fe32122791ffc645a755c51e88b4b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pw9nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bs8p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.577201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7337651b-cae2-43e5-9ef1-8c4aa402392c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbd6336a922d9b084b7b415841a5f83d28df2aca26ba076f94955cef6fb037d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://005423dc0dcb84bf4ec21763270fff9be17e3d8601bb9154379315e97ef53428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98df4166fa0ee219e48a2f0c2e737a9f9a0441bb8769248c691dab324bdd88ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681851f428afe81cdd9cbd54853c1a1605d40f3ef614cb064e1b7c80cd854034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18e878424bd4760482136c182d6da91d6421dda16f3c1c4652cc860c82f901e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16564aee1be7c5cd82f83439296c7f6270952b1b102411a561500fa0e5f3b7a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d23fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6afaf6fba98ec2f5234d2
3fb3287b37e07babeece203402aaf42062ebb593a6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37a9c744d6a785c174eb9f4c6700aec7476617a0c3b39db1c68631621e9b1d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.595729 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee01ea9-0d4f-48aa-9d93-04d2e3bd2b2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e6f6f233fa96721ef397f4e6cc31cd8b45d2583a9953f9c7b2ab0ef3d4b5f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8be12b2f677a526924ed2c5a52be35e80126ac147a2a63bb780a4d855d89867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23dc2d16331a18905d80cec57f3b04aeb2a3d0ffb628b6e583b1e8329700a751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98995c9514c60762e557bd9cb953b5e264af4dd1293b5c7cdd5a711a9522e11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.609714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xksvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f25c3eac-cf85-400e-be55-e093858a48be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9d293ae1113d2a91e30eacea7855e276d93c213aa49189563b553cc34c0d9f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spqq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xksvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.629256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.629543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.629625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.629692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.629766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.639164 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439475ac-7f06-4a47-9a81-9f4cf4083c38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112
b0bfed6778ab8ee7998ea540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:48Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1212 04:34:48.493316 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:34:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfckw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-996v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.657617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00ebf115-3809-461f-96eb-11c9989cec7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1212 04:33:41.844789 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1212 04:33:41.845763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2713872075/tls.crt::/tmp/serving-cert-2713872075/tls.key\\\\\\\"\\\\nI1212 04:33:47.291494 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 04:33:47.294730 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 04:33:47.294754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 04:33:47.294779 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 04:33:47.294800 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 04:33:47.301814 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 04:33:47.301842 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301848 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 04:33:47.301853 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 04:33:47.301857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 04:33:47.301862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 04:33:47.301866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 04:33:47.301907 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 04:33:47.306570 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.678844 4796 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-b68x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55b96fce-0e56-40cb-ab90-873a8421260b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-12T04:34:38Z\\\",\\\"message\\\":\\\"2025-12-12T04:33:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f\\\\n2025-12-12T04:33:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1f93951-de95-452c-8661-6b7ec86cdc8f to /host/opt/cni/bin/\\\\n2025-12-12T04:33:53Z [verbose] multus-daemon started\\\\n2025-12-12T04:33:53Z [verbose] Readiness Indicator file check\\\\n2025-12-12T04:34:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4ldq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b68x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.690519 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a81191a1-393c-400c-9b7d-6748c4a8fb36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:34:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t75tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:34:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ftpgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.703787 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.720523 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176a51d2415f2995e30f6c4ddb8d438ac88443612de639f17ab2754ae34b8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.732934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.733257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.733432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.733689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.733942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.742587 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5zck7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b7537ef-8ad8-4901-a2db-1881d2754684\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f00d6269c083d9f5f6251ed00b4d08ec0361f5d06aa3eaf0a1a4b21aeeb122a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d
20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f927346b78f9ff547344b49a965d20ac40baee52c0273bc51b8443f30ec516c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc855574462c2b46e099616b49ca89ab5f552553585a69868a3501c96c03c41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c40f918e5138a2f32f1518c3d7ef4602b115dc4ab451cb0206265e1bfe62c266\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e745c46842d103b22e5788a5f4124b2ff428d3e6c217d8cc7d74026b0838edab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bff903dd6b8bec14e44637cb92fbd664836b1c84e41efa21986e176589d33986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a91809490e273b186c015d377845868562c67aed619765db53c37c9a02850f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T04:33:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T04:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pp4xz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5zck7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.756364 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42576c28eb0d83633ba3458d1523c1a3a321e0b7f23968a6cb2b3d873b805fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e9
5ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T04:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trdtr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T04:33:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxh7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-12T04:34:59Z is after 2025-08-24T17:21:41Z" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.836895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.836966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.836984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.837013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.837030 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.939782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.939837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.939855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.939876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:34:59 crc kubenswrapper[4796]: I1212 04:34:59.939891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:34:59Z","lastTransitionTime":"2025-12-12T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.042378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.042425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.042438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.042457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.042468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.144720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.144770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.144778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.144791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.144815 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.247679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.247737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.247759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.247780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.247791 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.351233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.351349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.351372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.351401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.351419 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.411417 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.411437 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:00 crc kubenswrapper[4796]: E1212 04:35:00.411681 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:00 crc kubenswrapper[4796]: E1212 04:35:00.411837 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.454184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.454226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.454237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.454252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.454266 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.556792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.556851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.556869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.556927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.556945 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.660243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.660320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.660334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.660353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.660366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.763449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.763511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.763531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.763553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.763570 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.866256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.866373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.866393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.866418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.866436 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.969045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.969110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.969132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.969161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:00 crc kubenswrapper[4796]: I1212 04:35:00.969183 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:00Z","lastTransitionTime":"2025-12-12T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.071752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.071868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.071937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.072031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.072060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.175478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.175531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.175542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.175558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.175573 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.278374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.278440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.278465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.278497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.278525 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.380541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.380590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.380601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.380810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.380822 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.411421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.411675 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:01 crc kubenswrapper[4796]: E1212 04:35:01.411983 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:01 crc kubenswrapper[4796]: E1212 04:35:01.411677 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.484457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.484542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.484572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.484598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.484616 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.587783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.587876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.587902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.587973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.588066 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.690631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.690706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.690724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.690748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.690767 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.793792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.793886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.793906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.793997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.794052 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.897499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.897563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.897585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.897617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:01 crc kubenswrapper[4796]: I1212 04:35:01.897638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:01Z","lastTransitionTime":"2025-12-12T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.000228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.000331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.000351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.000376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.000395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.102482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.102547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.102572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.102602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.102624 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.205829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.205901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.205925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.205955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.205977 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.309537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.309625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.309649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.309684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.309710 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.410953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:02 crc kubenswrapper[4796]: E1212 04:35:02.411436 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412061 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:02 crc kubenswrapper[4796]: E1212 04:35:02.412610 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.412966 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.516676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.516737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.516758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.516785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.516802 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.620367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.620411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.620427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.620449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.620468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.723941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.724469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.724734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.724937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.725133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.828240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.828678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.828842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.828980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.829128 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.883324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.883383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.883401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.883423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.883439 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T04:35:02Z","lastTransitionTime":"2025-12-12T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.953059 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt"] Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.953848 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.957372 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.957525 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.959964 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.960344 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.982345 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.98231528 podStartE2EDuration="43.98231528s" podCreationTimestamp="2025-12-12 04:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:02.982192496 +0000 UTC m=+93.858209663" watchObservedRunningTime="2025-12-12 04:35:02.98231528 +0000 UTC m=+93.858332477" Dec 12 04:35:02 crc kubenswrapper[4796]: I1212 04:35:02.998631 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xksvx" podStartSLOduration=71.998610618 podStartE2EDuration="1m11.998610618s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:02.998455193 +0000 UTC m=+93.874472380" watchObservedRunningTime="2025-12-12 04:35:02.998610618 +0000 UTC m=+93.874627775" Dec 12 04:35:03 crc 
kubenswrapper[4796]: I1212 04:35:03.020111 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.020162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.020198 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.020231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.020251 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-service-ca\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.069751 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=74.069728798 podStartE2EDuration="1m14.069728798s" podCreationTimestamp="2025-12-12 04:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.068702056 +0000 UTC m=+93.944719243" watchObservedRunningTime="2025-12-12 04:35:03.069728798 +0000 UTC m=+93.945745995" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.106671 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b68x4" podStartSLOduration=72.106651011 podStartE2EDuration="1m12.106651011s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.089036811 +0000 UTC m=+93.965054008" watchObservedRunningTime="2025-12-12 04:35:03.106651011 +0000 UTC m=+93.982668158" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.121272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.121390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.121446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.121497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.121538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-service-ca\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.123232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-service-ca\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.123805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.124405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.142091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: 
\"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.152576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5eba0c-e775-4f72-8eb3-7e799fdf0590-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-twqqt\" (UID: \"9c5eba0c-e775-4f72-8eb3-7e799fdf0590\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.183100 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.183079206 podStartE2EDuration="1m16.183079206s" podCreationTimestamp="2025-12-12 04:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.169333857 +0000 UTC m=+94.045351044" watchObservedRunningTime="2025-12-12 04:35:03.183079206 +0000 UTC m=+94.059096363" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.227564 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5zck7" podStartSLOduration=72.227528483 podStartE2EDuration="1m12.227528483s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.204384492 +0000 UTC m=+94.080401679" watchObservedRunningTime="2025-12-12 04:35:03.227528483 +0000 UTC m=+94.103545670" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.229510 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podStartSLOduration=72.229493965 podStartE2EDuration="1m12.229493965s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.227703299 +0000 UTC m=+94.103720516" watchObservedRunningTime="2025-12-12 04:35:03.229493965 +0000 UTC m=+94.105511152" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.272132 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.309418 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.309402749 podStartE2EDuration="1m12.309402749s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.271059873 +0000 UTC m=+94.147077030" watchObservedRunningTime="2025-12-12 04:35:03.309402749 +0000 UTC m=+94.185419896" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.380707 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h7crl" podStartSLOduration=71.380689925 podStartE2EDuration="1m11.380689925s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.380449797 +0000 UTC m=+94.256466944" watchObservedRunningTime="2025-12-12 04:35:03.380689925 +0000 UTC m=+94.256707072" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.381562 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bs8p8" podStartSLOduration=72.381554421 podStartE2EDuration="1m12.381554421s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.368707121 +0000 UTC m=+94.244724298" watchObservedRunningTime="2025-12-12 04:35:03.381554421 +0000 UTC m=+94.257571568" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.391962 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.391943016 podStartE2EDuration="4.391943016s" podCreationTimestamp="2025-12-12 04:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.391642347 +0000 UTC m=+94.267659494" watchObservedRunningTime="2025-12-12 04:35:03.391943016 +0000 UTC m=+94.267960163" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.410495 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.410537 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:03 crc kubenswrapper[4796]: E1212 04:35:03.410621 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:03 crc kubenswrapper[4796]: E1212 04:35:03.410792 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.971050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" event={"ID":"9c5eba0c-e775-4f72-8eb3-7e799fdf0590","Type":"ContainerStarted","Data":"ff0c643efa27374beed12a30a72078c18b260c73689b26d18cad421f57d6856d"} Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.971102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" event={"ID":"9c5eba0c-e775-4f72-8eb3-7e799fdf0590","Type":"ContainerStarted","Data":"da702b957f27a6e079f70cfe88b99246b54cb629b815817ba5e4c4cca21e2c1e"} Dec 12 04:35:03 crc kubenswrapper[4796]: I1212 04:35:03.986588 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-twqqt" podStartSLOduration=72.986573176 podStartE2EDuration="1m12.986573176s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:03.984773711 +0000 UTC m=+94.860790878" watchObservedRunningTime="2025-12-12 04:35:03.986573176 +0000 UTC m=+94.862590323" Dec 12 04:35:04 crc kubenswrapper[4796]: I1212 04:35:04.410717 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:04 crc kubenswrapper[4796]: I1212 04:35:04.410773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:04 crc kubenswrapper[4796]: E1212 04:35:04.411303 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:04 crc kubenswrapper[4796]: E1212 04:35:04.411489 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:04 crc kubenswrapper[4796]: I1212 04:35:04.411778 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:35:04 crc kubenswrapper[4796]: E1212 04:35:04.411956 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:35:05 crc kubenswrapper[4796]: I1212 04:35:05.410851 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:05 crc kubenswrapper[4796]: I1212 04:35:05.410934 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:05 crc kubenswrapper[4796]: E1212 04:35:05.411050 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:05 crc kubenswrapper[4796]: E1212 04:35:05.411331 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:06 crc kubenswrapper[4796]: I1212 04:35:06.410788 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:06 crc kubenswrapper[4796]: E1212 04:35:06.411201 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:06 crc kubenswrapper[4796]: I1212 04:35:06.410883 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:06 crc kubenswrapper[4796]: E1212 04:35:06.411408 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:07 crc kubenswrapper[4796]: I1212 04:35:07.410878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:07 crc kubenswrapper[4796]: E1212 04:35:07.411044 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:07 crc kubenswrapper[4796]: I1212 04:35:07.411366 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:07 crc kubenswrapper[4796]: E1212 04:35:07.411473 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:08 crc kubenswrapper[4796]: I1212 04:35:08.410684 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:08 crc kubenswrapper[4796]: I1212 04:35:08.410701 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:08 crc kubenswrapper[4796]: E1212 04:35:08.410900 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:08 crc kubenswrapper[4796]: E1212 04:35:08.411060 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:09 crc kubenswrapper[4796]: I1212 04:35:09.410706 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:09 crc kubenswrapper[4796]: E1212 04:35:09.413013 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:09 crc kubenswrapper[4796]: I1212 04:35:09.413110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:09 crc kubenswrapper[4796]: E1212 04:35:09.413216 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:10 crc kubenswrapper[4796]: I1212 04:35:10.199498 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:10 crc kubenswrapper[4796]: E1212 04:35:10.199628 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:35:10 crc kubenswrapper[4796]: E1212 04:35:10.199681 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs podName:a81191a1-393c-400c-9b7d-6748c4a8fb36 nodeName:}" failed. No retries permitted until 2025-12-12 04:36:14.199665302 +0000 UTC m=+165.075682459 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs") pod "network-metrics-daemon-ftpgk" (UID: "a81191a1-393c-400c-9b7d-6748c4a8fb36") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 04:35:10 crc kubenswrapper[4796]: I1212 04:35:10.410470 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:10 crc kubenswrapper[4796]: I1212 04:35:10.410789 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:10 crc kubenswrapper[4796]: E1212 04:35:10.411015 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:10 crc kubenswrapper[4796]: E1212 04:35:10.410868 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:11 crc kubenswrapper[4796]: I1212 04:35:11.411055 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:11 crc kubenswrapper[4796]: I1212 04:35:11.411163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:11 crc kubenswrapper[4796]: E1212 04:35:11.411324 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:11 crc kubenswrapper[4796]: E1212 04:35:11.411476 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:12 crc kubenswrapper[4796]: I1212 04:35:12.410899 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:12 crc kubenswrapper[4796]: I1212 04:35:12.410899 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:12 crc kubenswrapper[4796]: E1212 04:35:12.411102 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:12 crc kubenswrapper[4796]: E1212 04:35:12.411231 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:13 crc kubenswrapper[4796]: I1212 04:35:13.411308 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:13 crc kubenswrapper[4796]: I1212 04:35:13.411333 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:13 crc kubenswrapper[4796]: E1212 04:35:13.412323 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:13 crc kubenswrapper[4796]: E1212 04:35:13.412397 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:14 crc kubenswrapper[4796]: I1212 04:35:14.410898 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:14 crc kubenswrapper[4796]: I1212 04:35:14.410940 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:14 crc kubenswrapper[4796]: E1212 04:35:14.411247 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:14 crc kubenswrapper[4796]: E1212 04:35:14.411412 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:15 crc kubenswrapper[4796]: I1212 04:35:15.411009 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:15 crc kubenswrapper[4796]: I1212 04:35:15.411023 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:15 crc kubenswrapper[4796]: E1212 04:35:15.411267 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:15 crc kubenswrapper[4796]: E1212 04:35:15.411593 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:16 crc kubenswrapper[4796]: I1212 04:35:16.411050 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:16 crc kubenswrapper[4796]: I1212 04:35:16.411081 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:16 crc kubenswrapper[4796]: E1212 04:35:16.411463 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:16 crc kubenswrapper[4796]: E1212 04:35:16.411625 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:17 crc kubenswrapper[4796]: I1212 04:35:17.411369 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:17 crc kubenswrapper[4796]: I1212 04:35:17.411409 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:17 crc kubenswrapper[4796]: E1212 04:35:17.411585 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:17 crc kubenswrapper[4796]: E1212 04:35:17.411729 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:18 crc kubenswrapper[4796]: I1212 04:35:18.411197 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:18 crc kubenswrapper[4796]: I1212 04:35:18.411225 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:18 crc kubenswrapper[4796]: E1212 04:35:18.411438 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:18 crc kubenswrapper[4796]: E1212 04:35:18.411582 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:19 crc kubenswrapper[4796]: I1212 04:35:19.410634 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:19 crc kubenswrapper[4796]: I1212 04:35:19.410710 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:19 crc kubenswrapper[4796]: E1212 04:35:19.413193 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:19 crc kubenswrapper[4796]: E1212 04:35:19.413595 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:19 crc kubenswrapper[4796]: I1212 04:35:19.414671 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:35:19 crc kubenswrapper[4796]: E1212 04:35:19.414882 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-996v7_openshift-ovn-kubernetes(439475ac-7f06-4a47-9a81-9f4cf4083c38)\"" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" Dec 12 04:35:20 crc kubenswrapper[4796]: I1212 04:35:20.410253 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:20 crc kubenswrapper[4796]: I1212 04:35:20.410274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:20 crc kubenswrapper[4796]: E1212 04:35:20.410456 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:20 crc kubenswrapper[4796]: E1212 04:35:20.410618 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:21 crc kubenswrapper[4796]: I1212 04:35:21.411124 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:21 crc kubenswrapper[4796]: I1212 04:35:21.411178 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:21 crc kubenswrapper[4796]: E1212 04:35:21.411347 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:21 crc kubenswrapper[4796]: E1212 04:35:21.411530 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:22 crc kubenswrapper[4796]: I1212 04:35:22.410484 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:22 crc kubenswrapper[4796]: I1212 04:35:22.411177 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:22 crc kubenswrapper[4796]: E1212 04:35:22.411597 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:22 crc kubenswrapper[4796]: E1212 04:35:22.411861 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:23 crc kubenswrapper[4796]: I1212 04:35:23.410984 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:23 crc kubenswrapper[4796]: I1212 04:35:23.410983 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:23 crc kubenswrapper[4796]: E1212 04:35:23.411318 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:23 crc kubenswrapper[4796]: E1212 04:35:23.411491 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:24 crc kubenswrapper[4796]: I1212 04:35:24.410563 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:24 crc kubenswrapper[4796]: I1212 04:35:24.410588 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:24 crc kubenswrapper[4796]: E1212 04:35:24.410806 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:24 crc kubenswrapper[4796]: E1212 04:35:24.410946 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.048536 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/1.log" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.049481 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/0.log" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.049570 4796 generic.go:334] "Generic (PLEG): container finished" podID="55b96fce-0e56-40cb-ab90-873a8421260b" containerID="503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd" exitCode=1 Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.049634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerDied","Data":"503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd"} Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.049729 4796 scope.go:117] "RemoveContainer" containerID="3e7c53eb5c93b6e5a59f1e0755c937165cd5b2389529024dd83ebf8fc9e18a98" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.050419 4796 scope.go:117] "RemoveContainer" containerID="503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd" Dec 12 04:35:25 crc kubenswrapper[4796]: E1212 04:35:25.051440 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b68x4_openshift-multus(55b96fce-0e56-40cb-ab90-873a8421260b)\"" pod="openshift-multus/multus-b68x4" podUID="55b96fce-0e56-40cb-ab90-873a8421260b" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.410638 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:25 crc kubenswrapper[4796]: I1212 04:35:25.410685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:25 crc kubenswrapper[4796]: E1212 04:35:25.410885 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:25 crc kubenswrapper[4796]: E1212 04:35:25.410993 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:26 crc kubenswrapper[4796]: I1212 04:35:26.056263 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/1.log" Dec 12 04:35:26 crc kubenswrapper[4796]: I1212 04:35:26.411235 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:26 crc kubenswrapper[4796]: E1212 04:35:26.411401 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:26 crc kubenswrapper[4796]: I1212 04:35:26.411244 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:26 crc kubenswrapper[4796]: E1212 04:35:26.411675 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:27 crc kubenswrapper[4796]: I1212 04:35:27.410810 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:27 crc kubenswrapper[4796]: I1212 04:35:27.410850 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:27 crc kubenswrapper[4796]: E1212 04:35:27.411046 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:27 crc kubenswrapper[4796]: E1212 04:35:27.411181 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:28 crc kubenswrapper[4796]: I1212 04:35:28.410923 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:28 crc kubenswrapper[4796]: E1212 04:35:28.411094 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:28 crc kubenswrapper[4796]: I1212 04:35:28.410922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:28 crc kubenswrapper[4796]: E1212 04:35:28.411508 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:29 crc kubenswrapper[4796]: E1212 04:35:29.385862 4796 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 12 04:35:29 crc kubenswrapper[4796]: I1212 04:35:29.410523 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:29 crc kubenswrapper[4796]: E1212 04:35:29.413247 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:29 crc kubenswrapper[4796]: I1212 04:35:29.413387 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:29 crc kubenswrapper[4796]: E1212 04:35:29.413550 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:29 crc kubenswrapper[4796]: E1212 04:35:29.612118 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 12 04:35:30 crc kubenswrapper[4796]: I1212 04:35:30.411071 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:30 crc kubenswrapper[4796]: E1212 04:35:30.411258 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:30 crc kubenswrapper[4796]: I1212 04:35:30.411081 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:30 crc kubenswrapper[4796]: E1212 04:35:30.411651 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:31 crc kubenswrapper[4796]: I1212 04:35:31.411356 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:31 crc kubenswrapper[4796]: I1212 04:35:31.411408 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:31 crc kubenswrapper[4796]: E1212 04:35:31.412198 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:31 crc kubenswrapper[4796]: E1212 04:35:31.412340 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:32 crc kubenswrapper[4796]: I1212 04:35:32.411152 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:32 crc kubenswrapper[4796]: I1212 04:35:32.411152 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:32 crc kubenswrapper[4796]: E1212 04:35:32.411271 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:32 crc kubenswrapper[4796]: E1212 04:35:32.411399 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:33 crc kubenswrapper[4796]: I1212 04:35:33.411391 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:33 crc kubenswrapper[4796]: E1212 04:35:33.411544 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:33 crc kubenswrapper[4796]: I1212 04:35:33.411393 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:33 crc kubenswrapper[4796]: E1212 04:35:33.411757 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:34 crc kubenswrapper[4796]: I1212 04:35:34.410963 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:34 crc kubenswrapper[4796]: I1212 04:35:34.411130 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:34 crc kubenswrapper[4796]: E1212 04:35:34.411275 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:34 crc kubenswrapper[4796]: E1212 04:35:34.411575 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:34 crc kubenswrapper[4796]: I1212 04:35:34.413437 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:35:34 crc kubenswrapper[4796]: E1212 04:35:34.613008 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.090053 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/3.log" Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.092570 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerStarted","Data":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.093130 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.126830 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podStartSLOduration=104.126816846 podStartE2EDuration="1m44.126816846s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:35.1259977 +0000 UTC m=+126.002014857" watchObservedRunningTime="2025-12-12 04:35:35.126816846 +0000 UTC m=+126.002833993" Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.313620 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ftpgk"] Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.313727 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:35 crc kubenswrapper[4796]: E1212 04:35:35.313807 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:35 crc kubenswrapper[4796]: I1212 04:35:35.411081 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:35 crc kubenswrapper[4796]: E1212 04:35:35.411519 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:36 crc kubenswrapper[4796]: I1212 04:35:36.410468 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:36 crc kubenswrapper[4796]: I1212 04:35:36.410503 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:36 crc kubenswrapper[4796]: E1212 04:35:36.410609 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:36 crc kubenswrapper[4796]: E1212 04:35:36.410724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:37 crc kubenswrapper[4796]: I1212 04:35:37.410815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:37 crc kubenswrapper[4796]: I1212 04:35:37.410856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:37 crc kubenswrapper[4796]: E1212 04:35:37.411232 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:37 crc kubenswrapper[4796]: E1212 04:35:37.411481 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:37 crc kubenswrapper[4796]: I1212 04:35:37.411944 4796 scope.go:117] "RemoveContainer" containerID="503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd" Dec 12 04:35:38 crc kubenswrapper[4796]: I1212 04:35:38.105415 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/1.log" Dec 12 04:35:38 crc kubenswrapper[4796]: I1212 04:35:38.105517 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerStarted","Data":"414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3"} Dec 12 04:35:38 crc kubenswrapper[4796]: I1212 04:35:38.411417 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:38 crc kubenswrapper[4796]: I1212 04:35:38.411447 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:38 crc kubenswrapper[4796]: E1212 04:35:38.412018 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 12 04:35:38 crc kubenswrapper[4796]: E1212 04:35:38.412205 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 12 04:35:39 crc kubenswrapper[4796]: I1212 04:35:39.410655 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:39 crc kubenswrapper[4796]: I1212 04:35:39.410798 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:39 crc kubenswrapper[4796]: E1212 04:35:39.413478 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ftpgk" podUID="a81191a1-393c-400c-9b7d-6748c4a8fb36" Dec 12 04:35:39 crc kubenswrapper[4796]: E1212 04:35:39.413568 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.410583 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.410682 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.412635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.413267 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.413433 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 12 04:35:40 crc kubenswrapper[4796]: I1212 04:35:40.414555 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 12 04:35:41 crc kubenswrapper[4796]: I1212 04:35:41.410704 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:41 crc kubenswrapper[4796]: I1212 04:35:41.410750 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:35:41 crc kubenswrapper[4796]: I1212 04:35:41.416447 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 12 04:35:41 crc kubenswrapper[4796]: I1212 04:35:41.418395 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.821933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.884538 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45hnd"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.885085 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.886995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zqsxz"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.887542 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.888640 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.889253 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.899045 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.899808 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.906177 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.906609 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.907596 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.917689 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.920066 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.920911 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.921780 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.924667 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.925055 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.925445 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.925644 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 12 04:35:43 crc kubenswrapper[4796]: W1212 04:35:43.925798 4796 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 12 04:35:43 crc kubenswrapper[4796]: E1212 04:35:43.925829 4796 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 04:35:43 crc kubenswrapper[4796]: W1212 04:35:43.925876 4796 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 12 04:35:43 crc kubenswrapper[4796]: E1212 04:35:43.925891 4796 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 04:35:43 crc kubenswrapper[4796]: W1212 04:35:43.925931 4796 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 12 04:35:43 crc kubenswrapper[4796]: E1212 04:35:43.925943 4796 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 04:35:43 crc kubenswrapper[4796]: W1212 04:35:43.925982 4796 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Dec 12 04:35:43 crc kubenswrapper[4796]: E1212 04:35:43.925994 4796 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.926042 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.926196 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.926399 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.926562 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.928381 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.928566 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.928974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 12 04:35:43 crc 
kubenswrapper[4796]: I1212 04:35:43.929117 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.929314 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.929513 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.929753 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.929891 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.930685 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.931213 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.931590 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.932817 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.934400 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.934780 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.935087 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.935329 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.936238 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.936867 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.937466 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.957433 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.957628 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4rc6c"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.958034 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.958419 4796 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.958431 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.958453 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.959664 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.959966 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.963198 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.963618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.966151 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gnxjc"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.966609 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.968109 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l56xp"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.970907 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-djxgm"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.971355 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.971614 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45hnd"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.971687 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.972174 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.972415 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.972738 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.973138 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.973237 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.973997 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.974401 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.974496 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.974823 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.980028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-client\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.980200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lrm\" (UniqueName: \"kubernetes.io/projected/12948a9c-fd3a-429c-bb98-e3d449208beb-kube-api-access-t5lrm\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.980319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-serving-cert\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.980436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.980513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5jt\" (UniqueName: 
\"kubernetes.io/projected/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-kube-api-access-cz5jt\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.986750 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.987013 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7vd5t"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.987262 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.987600 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988109 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q627j"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988118 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988158 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988300 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988533 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988942 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.988964 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.989712 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-px76m"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.990117 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.990809 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991214 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-policies\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991692 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991703 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-client\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991894 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-auth-proxy-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.991971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992036 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-node-pullsecrets\") pod 
\"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit-dir\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992223 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992320 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-serving-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992396 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-encryption-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992463 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992582 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992748 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992659 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57"] Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993567 4796 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.992819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtxh\" (UniqueName: \"kubernetes.io/projected/4982303d-d471-4a21-a85f-4fd2ce6d3481-kube-api-access-fmtxh\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tb6\" (UniqueName: \"kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-config\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4982303d-d471-4a21-a85f-4fd2ce6d3481-serving-cert\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993904 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-encryption-config\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " 
pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993945 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkjj\" (UniqueName: \"kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-service-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.993990 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-dir\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-config\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-images\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e05fbfb-ba4c-465c-94a2-49f666f39c02-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:43 crc 
kubenswrapper[4796]: I1212 04:35:43.994139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-machine-approver-tls\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rhq\" (UniqueName: \"kubernetes.io/projected/8e05fbfb-ba4c-465c-94a2-49f666f39c02-kube-api-access-29rhq\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr24z\" (UniqueName: \"kubernetes.io/projected/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-kube-api-access-mr24z\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994217 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqg47\" (UniqueName: \"kubernetes.io/projected/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-kube-api-access-rqg47\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994235 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-image-import-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-serving-cert\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994328 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.994385 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:43 crc kubenswrapper[4796]: I1212 04:35:43.998824 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.004126 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-84sk9"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.004680 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.017638 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.017737 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.027923 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9d9v"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.028514 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.028731 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.093479 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/82432f4b-5d7d-4b20-9cc7-daacc71964d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-policies\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095818 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-client\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095850 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-auth-proxy-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgswk\" (UniqueName: \"kubernetes.io/projected/8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e-kube-api-access-jgswk\") pod \"downloads-7954f5f757-l56xp\" (UID: \"8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e\") " pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-node-pullsecrets\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit-dir\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.095984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096000 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-serving-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-encryption-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxbv\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-kube-api-access-rjxbv\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096082 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096096 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtxh\" (UniqueName: \"kubernetes.io/projected/4982303d-d471-4a21-a85f-4fd2ce6d3481-kube-api-access-fmtxh\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096127 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c55f34-076c-445a-bd67-836624d9a968-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096141 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/32b6def7-556e-45e2-ae14-84211e7da580-signing-cabundle\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: 
\"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57tb6\" (UniqueName: \"kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096178 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-config\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4982303d-d471-4a21-a85f-4fd2ce6d3481-serving-cert\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096238 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-service-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-encryption-config\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 
04:35:44.096298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grkjj\" (UniqueName: \"kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096329 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xg6\" (UniqueName: \"kubernetes.io/projected/32b6def7-556e-45e2-ae14-84211e7da580-kube-api-access-h4xg6\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9076f897-29ef-41d6-9cb0-d89f24362c0b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096364 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-images\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-dir\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096411 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096440 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096455 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096470 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096486 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-config\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096516 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aa08f3-dce6-4268-a735-fd3a5e10fd77-metrics-tls\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096532 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-images\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-trusted-ca\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096580 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/32b6def7-556e-45e2-ae14-84211e7da580-signing-key\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096597 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjzs\" (UniqueName: \"kubernetes.io/projected/82432f4b-5d7d-4b20-9cc7-daacc71964d2-kube-api-access-wbjzs\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-proxy-tls\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096640 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9076f897-29ef-41d6-9cb0-d89f24362c0b-config\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-metrics-tls\") pod \"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096677 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-config\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e05fbfb-ba4c-465c-94a2-49f666f39c02-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096725 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa5275c-32ff-433c-bdbb-0e4c152224b8-serving-cert\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-machine-approver-tls\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096758 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096774 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096791 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rhq\" (UniqueName: \"kubernetes.io/projected/8e05fbfb-ba4c-465c-94a2-49f666f39c02-kube-api-access-29rhq\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096805 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr24z\" (UniqueName: \"kubernetes.io/projected/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-kube-api-access-mr24z\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqg47\" (UniqueName: \"kubernetes.io/projected/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-kube-api-access-rqg47\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.096880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82432f4b-5d7d-4b20-9cc7-daacc71964d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.097725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-policies\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.097964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-config\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.097981 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098221 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999fc\" (UniqueName: \"kubernetes.io/projected/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-kube-api-access-999fc\") pod 
\"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5v6p\" (UniqueName: \"kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-image-import-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098297 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9076f897-29ef-41d6-9cb0-d89f24362c0b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-serving-cert\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: 
\"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznzl\" (UniqueName: \"kubernetes.io/projected/6aa5275c-32ff-433c-bdbb-0e4c152224b8-kube-api-access-lznzl\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.098666 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e05fbfb-ba4c-465c-94a2-49f666f39c02-images\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.099266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-serving-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.099993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.100106 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12948a9c-fd3a-429c-bb98-e3d449208beb-audit-dir\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.103954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.104339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-node-pullsecrets\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.104391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-audit-dir\") pod \"apiserver-76f77b778f-zqsxz\" (UID: 
\"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105220 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-client\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105373 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v42n\" (UniqueName: \"kubernetes.io/projected/59c55f34-076c-445a-bd67-836624d9a968-kube-api-access-9v42n\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa08f3-dce6-4268-a735-fd3a5e10fd77-trusted-ca\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5lrm\" (UniqueName: \"kubernetes.io/projected/12948a9c-fd3a-429c-bb98-e3d449208beb-kube-api-access-t5lrm\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105448 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-serving-cert\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105471 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.105495 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c55f34-076c-445a-bd67-836624d9a968-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.106021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.106518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257lx\" (UniqueName: \"kubernetes.io/projected/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-kube-api-access-257lx\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.106584 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.106665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.106723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12948a9c-fd3a-429c-bb98-e3d449208beb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.107358 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.110321 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.110428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-image-import-ca\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.110836 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111013 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc 
kubenswrapper[4796]: I1212 04:35:44.111147 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111257 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111543 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111641 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.111899 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.113093 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.113736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5jt\" (UniqueName: \"kubernetes.io/projected/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-kube-api-access-cz5jt\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.113860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.113944 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vvx\" (UniqueName: \"kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.114020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-config\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.114749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-encryption-config\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.118095 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-encryption-config\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.118885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.119086 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.119183 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.120000 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.120192 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.120454 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.120623 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.120780 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.121274 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.121470 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.121643 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.121811 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.121997 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.122198 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.122443 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.122668 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.122882 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.123055 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.123630 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.124013 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.124593 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.124886 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.124924 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-auth-proxy-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125244 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125616 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-config\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125922 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.190920 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.190956 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.190993 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.152869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-serving-cert\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153298 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.172773 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-machine-approver-tls\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.187920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-etcd-client\") pod \"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.188183 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4982303d-d471-4a21-a85f-4fd2ce6d3481-serving-cert\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.189514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.189878 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-serving-cert\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.189909 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/12948a9c-fd3a-429c-bb98-e3d449208beb-etcd-client\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.190101 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e05fbfb-ba4c-465c-94a2-49f666f39c02-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.136122 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-service-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.126785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-config\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125666 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125674 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125682 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125733 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125788 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125825 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125899 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125904 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125943 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125970 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.125973 4796 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.126012 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.152500 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153476 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153504 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153519 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153554 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153680 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153727 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.153753 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.154041 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.154093 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.154985 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.156295 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.156327 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.156353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.174294 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.174963 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.176027 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.176948 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.177001 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.177160 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.177372 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.177427 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.186131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.188775 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.189755 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.189917 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.190901 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.200536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4982303d-d471-4a21-a85f-4fd2ce6d3481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.203428 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204256 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204645 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gnxjc"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204741 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204812 4796 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zqsxz"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204881 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.204951 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.205445 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.205707 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.205930 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.206171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.206807 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.207727 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.209497 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.210041 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwh8h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.210117 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.210855 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.210914 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.211068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.211474 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.211763 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7vd5t"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.215683 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5v6p\" (UniqueName: \"kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82432f4b-5d7d-4b20-9cc7-daacc71964d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216165 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999fc\" (UniqueName: \"kubernetes.io/projected/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-kube-api-access-999fc\") pod \"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216190 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9076f897-29ef-41d6-9cb0-d89f24362c0b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: 
\"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216233 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznzl\" (UniqueName: \"kubernetes.io/projected/6aa5275c-32ff-433c-bdbb-0e4c152224b8-kube-api-access-lznzl\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa08f3-dce6-4268-a735-fd3a5e10fd77-trusted-ca\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v42n\" (UniqueName: \"kubernetes.io/projected/59c55f34-076c-445a-bd67-836624d9a968-kube-api-access-9v42n\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c55f34-076c-445a-bd67-836624d9a968-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216391 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216415 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257lx\" (UniqueName: \"kubernetes.io/projected/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-kube-api-access-257lx\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vvx\" (UniqueName: \"kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-config\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/82432f4b-5d7d-4b20-9cc7-daacc71964d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216570 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216615 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgswk\" (UniqueName: \"kubernetes.io/projected/8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e-kube-api-access-jgswk\") pod \"downloads-7954f5f757-l56xp\" (UID: \"8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e\") " pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216655 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216687 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjxbv\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-kube-api-access-rjxbv\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c55f34-076c-445a-bd67-836624d9a968-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/32b6def7-556e-45e2-ae14-84211e7da580-signing-cabundle\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xg6\" (UniqueName: \"kubernetes.io/projected/32b6def7-556e-45e2-ae14-84211e7da580-kube-api-access-h4xg6\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9076f897-29ef-41d6-9cb0-d89f24362c0b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-images\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216874 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216929 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216948 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216989 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.216988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aa08f3-dce6-4268-a735-fd3a5e10fd77-metrics-tls\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-trusted-ca\") pod 
\"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbjzs\" (UniqueName: \"kubernetes.io/projected/82432f4b-5d7d-4b20-9cc7-daacc71964d2-kube-api-access-wbjzs\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/32b6def7-556e-45e2-ae14-84211e7da580-signing-key\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217178 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-metrics-tls\") pod \"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-config\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217221 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-proxy-tls\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9076f897-29ef-41d6-9cb0-d89f24362c0b-config\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217265 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.217373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.218471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aa5275c-32ff-433c-bdbb-0e4c152224b8-serving-cert\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.218510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.219421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-trusted-ca\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.220094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.220405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82432f4b-5d7d-4b20-9cc7-daacc71964d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.220791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.220902 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.221139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.221999 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.222028 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-djxgm"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.222623 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.223071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.225434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.229182 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c55f34-076c-445a-bd67-836624d9a968-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.231032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.231223 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l56xp"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.232111 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.236551 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfpqc"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.232569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6aa5275c-32ff-433c-bdbb-0e4c152224b8-serving-cert\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.233519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.233958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/82432f4b-5d7d-4b20-9cc7-daacc71964d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.234265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa5275c-32ff-433c-bdbb-0e4c152224b8-config\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.237121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.234406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c55f34-076c-445a-bd67-836624d9a968-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.234605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.237363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.233193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.237807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.239650 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.239759 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.239788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert\") pod \"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.239913 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.241502 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4rc6c"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.241984 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rk7sn"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.245866 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.247566 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.249490 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.249608 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.257757 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.261379 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.263529 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2mk57"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.264202 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.266673 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.266823 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.268092 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.270644 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.278185 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9d9v"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.279381 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.280967 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.284517 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-84sk9"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.286601 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.291765 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.294623 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.302687 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.314640 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.314729 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx"] Dec 12 04:35:44 crc kubenswrapper[4796]: 
I1212 04:35:44.315876 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q627j"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.316761 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.317800 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.319390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwh8h"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.319700 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.320726 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.322247 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.323863 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfpqc"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.325103 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-scqpl"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.325813 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.326175 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2mk57"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.326411 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.327208 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-scqpl"] Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.346943 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.353468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9076f897-29ef-41d6-9cb0-d89f24362c0b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.371786 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.381314 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9076f897-29ef-41d6-9cb0-d89f24362c0b-config\") pod 
\"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.387109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.407383 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.427713 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.446935 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.467010 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.487705 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.501100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aa08f3-dce6-4268-a735-fd3a5e10fd77-metrics-tls\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.514129 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.517593 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa08f3-dce6-4268-a735-fd3a5e10fd77-trusted-ca\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.527093 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.546721 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.567607 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.587243 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.593346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-metrics-tls\") pod \"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.607150 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.627447 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.647357 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.667849 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.687663 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.708314 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.727413 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.747451 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.767856 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.787983 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.807506 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.817347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.827983 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.847239 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.852017 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-config\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.868519 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.874497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-images\") pod 
\"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.887754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.908549 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.914754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-proxy-tls\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.928235 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.948206 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.967820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 12 04:35:44 crc kubenswrapper[4796]: I1212 04:35:44.988126 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.005806 4796 request.go:700] Waited for 1.011770857s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.007844 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.028374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.048723 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.068552 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.088500 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 04:35:45 crc kubenswrapper[4796]: E1212 04:35:45.099013 4796 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 12 04:35:45 crc kubenswrapper[4796]: E1212 04:35:45.099118 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls podName:b1c0c8e3-e1de-4ce9-99e3-c048e499f10d nodeName:}" 
failed. No retries permitted until 2025-12-12 04:35:45.599088497 +0000 UTC m=+136.475105684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-t7rhc" (UID: "b1c0c8e3-e1de-4ce9-99e3-c048e499f10d") : failed to sync secret cache: timed out waiting for the condition Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.128614 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.131529 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.147809 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.169144 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.175655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/32b6def7-556e-45e2-ae14-84211e7da580-signing-key\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.188222 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.208453 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.228384 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.232225 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/32b6def7-556e-45e2-ae14-84211e7da580-signing-cabundle\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.247370 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.267511 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.332597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr24z\" (UniqueName: \"kubernetes.io/projected/1e8bbd2f-76b0-4f93-96f7-96f4c152838b-kube-api-access-mr24z\") pod \"machine-approver-56656f9798-dgpwf\" (UID: \"1e8bbd2f-76b0-4f93-96f7-96f4c152838b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.355614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqg47\" (UniqueName: \"kubernetes.io/projected/b6ea1627-29bb-4e98-8ed3-10fe828c7b80-kube-api-access-rqg47\") pod 
\"apiserver-76f77b778f-zqsxz\" (UID: \"b6ea1627-29bb-4e98-8ed3-10fe828c7b80\") " pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.364452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grkjj\" (UniqueName: \"kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj\") pod \"console-f9d7485db-4tvxf\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.394012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rhq\" (UniqueName: \"kubernetes.io/projected/8e05fbfb-ba4c-465c-94a2-49f666f39c02-kube-api-access-29rhq\") pod \"machine-api-operator-5694c8668f-45hnd\" (UID: \"8e05fbfb-ba4c-465c-94a2-49f666f39c02\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.411671 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.419365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtxh\" (UniqueName: \"kubernetes.io/projected/4982303d-d471-4a21-a85f-4fd2ce6d3481-kube-api-access-fmtxh\") pod \"authentication-operator-69f744f599-4rc6c\" (UID: \"4982303d-d471-4a21-a85f-4fd2ce6d3481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.429387 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.434489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5lrm\" (UniqueName: \"kubernetes.io/projected/12948a9c-fd3a-429c-bb98-e3d449208beb-kube-api-access-t5lrm\") pod \"apiserver-7bbb656c7d-d9d4m\" (UID: \"12948a9c-fd3a-429c-bb98-e3d449208beb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.443390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tb6\" (UniqueName: \"kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6\") pod \"controller-manager-879f6c89f-t4mwh\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.473118 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.488412 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.488450 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.508900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.529868 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.547053 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.547640 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.560370 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.567620 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.569318 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.589672 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.607839 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.628984 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.645133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.649259 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.669735 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.689550 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.707330 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.712000 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-45hnd"] Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.730159 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.738718 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zqsxz"] Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.747471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: W1212 04:35:45.767017 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ea1627_29bb_4e98_8ed3_10fe828c7b80.slice/crio-330077bf95cfca4057d194c715bb4ad304b647ff20cc739b9d0e6245cb86c039 WatchSource:0}: Error finding container 330077bf95cfca4057d194c715bb4ad304b647ff20cc739b9d0e6245cb86c039: Status 404 returned error can't find the container with id 330077bf95cfca4057d194c715bb4ad304b647ff20cc739b9d0e6245cb86c039 Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.768439 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.768621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m"] Dec 12 04:35:45 crc kubenswrapper[4796]: W1212 04:35:45.781433 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12948a9c_fd3a_429c_bb98_e3d449208beb.slice/crio-65f336f6ea7c948f90caf19727cf49c4059d0f796eaaf287527ed66858ea7892 WatchSource:0}: Error finding container 65f336f6ea7c948f90caf19727cf49c4059d0f796eaaf287527ed66858ea7892: Status 404 returned error can't find the container with id 65f336f6ea7c948f90caf19727cf49c4059d0f796eaaf287527ed66858ea7892 Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.787317 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.799494 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.807421 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.828142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 12 04:35:45 crc kubenswrapper[4796]: W1212 04:35:45.834928 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa0261aa_ad16_4189_aaf8_6aacb68d1f1c.slice/crio-c3c32755979083f2e3da1882daba1e51a1d91fa4d7952b71615dbc9e550f8cfe WatchSource:0}: Error finding container c3c32755979083f2e3da1882daba1e51a1d91fa4d7952b71615dbc9e550f8cfe: Status 404 returned error can't find the container with id c3c32755979083f2e3da1882daba1e51a1d91fa4d7952b71615dbc9e550f8cfe Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.862686 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5v6p\" (UniqueName: \"kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p\") pod 
\"route-controller-manager-6576b87f9c-4722c\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.879473 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.880479 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbjzs\" (UniqueName: \"kubernetes.io/projected/82432f4b-5d7d-4b20-9cc7-daacc71964d2-kube-api-access-wbjzs\") pod \"openshift-config-operator-7777fb866f-djxgm\" (UID: \"82432f4b-5d7d-4b20-9cc7-daacc71964d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.899186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999fc\" (UniqueName: \"kubernetes.io/projected/83c81be8-f61a-4bb0-b4b2-26dd509a7d9e-kube-api-access-999fc\") pod \"dns-operator-744455d44c-q627j\" (UID: \"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.918809 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xg6\" (UniqueName: \"kubernetes.io/projected/32b6def7-556e-45e2-ae14-84211e7da580-kube-api-access-h4xg6\") pod \"service-ca-9c57cc56f-n9d9v\" (UID: \"32b6def7-556e-45e2-ae14-84211e7da580\") " pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.940613 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgswk\" (UniqueName: \"kubernetes.io/projected/8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e-kube-api-access-jgswk\") pod \"downloads-7954f5f757-l56xp\" (UID: \"8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e\") " pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.968430 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.985378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjxbv\" (UniqueName: \"kubernetes.io/projected/55aa08f3-dce6-4268-a735-fd3a5e10fd77-kube-api-access-rjxbv\") pod \"ingress-operator-5b745b69d9-qngxr\" (UID: \"55aa08f3-dce6-4268-a735-fd3a5e10fd77\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:45 crc kubenswrapper[4796]: I1212 04:35:45.992683 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.004068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9076f897-29ef-41d6-9cb0-d89f24362c0b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m5vc2\" (UID: \"9076f897-29ef-41d6-9cb0-d89f24362c0b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.006067 4796 request.go:700] Waited for 1.779330358s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.006214 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.026699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznzl\" (UniqueName: \"kubernetes.io/projected/6aa5275c-32ff-433c-bdbb-0e4c152224b8-kube-api-access-lznzl\") pod \"console-operator-58897d9998-gnxjc\" (UID: \"6aa5275c-32ff-433c-bdbb-0e4c152224b8\") " pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.042394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v42n\" (UniqueName: \"kubernetes.io/projected/59c55f34-076c-445a-bd67-836624d9a968-kube-api-access-9v42n\") pod \"kube-storage-version-migrator-operator-b67b599dd-72sgg\" (UID: \"59c55f34-076c-445a-bd67-836624d9a968\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.065313 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bd8f2dd-9f3f-4f41-a4e6-423b159176ca-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tdlpn\" (UID: \"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.070570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.075171 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4rc6c"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.079469 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.084425 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.091887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257lx\" (UniqueName: \"kubernetes.io/projected/aa42ca16-b6a5-4bdb-a0de-c0887b77bf61-kube-api-access-257lx\") pod \"machine-config-operator-74547568cd-ptlwf\" (UID: \"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.095299 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.106562 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.111000 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vvx\" (UniqueName: \"kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx\") pod \"oauth-openshift-558db77b4-4tb6c\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.117443 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.120934 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.124416 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.130896 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.145394 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.148111 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.169350 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.204111 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.206366 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.208551 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.231367 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.240601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" event={"ID":"8e05fbfb-ba4c-465c-94a2-49f666f39c02","Type":"ContainerStarted","Data":"aac2ce3854256637004b7b03a91b4a0b0aa2c9ec9bd5f567a1667d09e3603cc6"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.240645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" event={"ID":"8e05fbfb-ba4c-465c-94a2-49f666f39c02","Type":"ContainerStarted","Data":"06d99b95349c7c4be64ab67da93742f750be855ef11d13512ea0f1c8c93f9b03"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.246125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" event={"ID":"12948a9c-fd3a-429c-bb98-e3d449208beb","Type":"ContainerStarted","Data":"65f336f6ea7c948f90caf19727cf49c4059d0f796eaaf287527ed66858ea7892"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.247085 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.266834 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.270552 4796 generic.go:334] "Generic (PLEG): container finished" podID="b6ea1627-29bb-4e98-8ed3-10fe828c7b80" containerID="d942b38d42d9ef3b43b81fd61c49c4f998cde5995639a99e64bc3fac5e7d2be0" exitCode=0 Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.270650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" event={"ID":"b6ea1627-29bb-4e98-8ed3-10fe828c7b80","Type":"ContainerDied","Data":"d942b38d42d9ef3b43b81fd61c49c4f998cde5995639a99e64bc3fac5e7d2be0"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.270684 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" event={"ID":"b6ea1627-29bb-4e98-8ed3-10fe828c7b80","Type":"ContainerStarted","Data":"330077bf95cfca4057d194c715bb4ad304b647ff20cc739b9d0e6245cb86c039"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.285653 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.286686 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.306920 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.311716 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" event={"ID":"1e8bbd2f-76b0-4f93-96f7-96f4c152838b","Type":"ContainerStarted","Data":"93d43c25e4b3829b96ce9456d815ff4653c190bd91f015df7700fc968d079fd8"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.311757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" event={"ID":"1e8bbd2f-76b0-4f93-96f7-96f4c152838b","Type":"ContainerStarted","Data":"9577e67890b624b947c678323221ac8178264e8c116f2748b49ee83980af6983"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.326974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.337442 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.344673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" event={"ID":"4982303d-d471-4a21-a85f-4fd2ce6d3481","Type":"ContainerStarted","Data":"051493c48ad0903b53c44c056738a1a40bfc039b4bda6117f5b667167d216e63"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.345937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" event={"ID":"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c","Type":"ContainerStarted","Data":"8cdb7f6f8f311e88187a767e3d8df8dd438551736c52aa30316f6ac1c335bb8a"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.345968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" event={"ID":"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c","Type":"ContainerStarted","Data":"c3c32755979083f2e3da1882daba1e51a1d91fa4d7952b71615dbc9e550f8cfe"} Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.346860 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.348099 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.357060 4796 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t4mwh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.357104 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.357567 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-djxgm"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.367411 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 12 04:35:46 crc kubenswrapper[4796]: W1212 04:35:46.389107 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82432f4b_5d7d_4b20_9cc7_daacc71964d2.slice/crio-ba45db1e77c6fc4c4f8026fa7913793409a4509f1c4483231629da6da1263316 WatchSource:0}: Error finding container ba45db1e77c6fc4c4f8026fa7913793409a4509f1c4483231629da6da1263316: Status 404 returned error can't find the container with id ba45db1e77c6fc4c4f8026fa7913793409a4509f1c4483231629da6da1263316 Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.389319 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.411800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.423791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz5jt\" (UniqueName: \"kubernetes.io/projected/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-kube-api-access-cz5jt\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.429650 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.449312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c0c8e3-e1de-4ce9-99e3-c048e499f10d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t7rhc\" (UID: \"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51531dfd-5912-48fc-9648-b87a47679e7d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdwml\" (UniqueName: \"kubernetes.io/projected/6a87db66-197c-4eda-83d1-984d3b0957e8-kube-api-access-gdwml\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a87db66-197c-4eda-83d1-984d3b0957e8-proxy-tls\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465790 
4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/035ca985-1000-4c28-aece-3c46abf07371-service-ca-bundle\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465814 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31c848d8-f580-49c9-b556-6ef0ec189a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-default-certificate\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465857 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8ms\" (UniqueName: \"kubernetes.io/projected/9e2dad3c-1039-41f9-9df2-633a0d146b52-kube-api-access-hp8ms\") pod \"migrator-59844c95c7-6774s\" (UID: \"9e2dad3c-1039-41f9-9df2-633a0d146b52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2lv\" (UniqueName: \"kubernetes.io/projected/51531dfd-5912-48fc-9648-b87a47679e7d-kube-api-access-vz2lv\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-stats-auth\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.465971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a87db66-197c-4eda-83d1-984d3b0957e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc 
kubenswrapper[4796]: I1212 04:35:46.465987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n54p\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-kube-api-access-6n54p\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-serving-cert\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466055 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-srv-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466070 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/31c848d8-f580-49c9-b556-6ef0ec189a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59flv\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc 
kubenswrapper[4796]: I1212 04:35:46.466122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466144 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-metrics-certs\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466175 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466189 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-client\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466213 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrzn\" (UniqueName: \"kubernetes.io/projected/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-kube-api-access-pnrzn\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-service-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-config\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466453 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466562 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466585 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfcq\" (UniqueName: \"kubernetes.io/projected/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-kube-api-access-wxfcq\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfxt\" (UniqueName: \"kubernetes.io/projected/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-kube-api-access-8gfxt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466638 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlw8\" (UniqueName: \"kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466695 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqlz\" (UniqueName: \"kubernetes.io/projected/035ca985-1000-4c28-aece-3c46abf07371-kube-api-access-2pqlz\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 
04:35:46.466781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466806 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51531dfd-5912-48fc-9648-b87a47679e7d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.466856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.468223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:46.968205907 +0000 UTC m=+137.844223144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.486560 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr"] Dec 12 04:35:46 crc kubenswrapper[4796]: W1212 04:35:46.556021 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55aa08f3_dce6_4268_a735_fd3a5e10fd77.slice/crio-954252b74338fc3e7404561774930046dbdc6e228eabf3b086dc57fb611c9bed WatchSource:0}: Error finding container 954252b74338fc3e7404561774930046dbdc6e228eabf3b086dc57fb611c9bed: Status 404 returned error can't find the container with id 954252b74338fc3e7404561774930046dbdc6e228eabf3b086dc57fb611c9bed Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.567862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2lv\" (UniqueName: \"kubernetes.io/projected/51531dfd-5912-48fc-9648-b87a47679e7d-kube-api-access-vz2lv\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-stats-auth\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a87db66-197c-4eda-83d1-984d3b0957e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rxj\" (UniqueName: 
\"kubernetes.io/projected/fe55b40c-eb22-441f-ac88-98cf1199f515-kube-api-access-99rxj\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb54n\" (UniqueName: \"kubernetes.io/projected/55716cbf-9383-4bb0-806a-09ae626c5e9f-kube-api-access-wb54n\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568173 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n54p\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-kube-api-access-6n54p\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-serving-cert\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.568269 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.068250433 +0000 UTC m=+137.944267570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-webhook-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568434 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-srv-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568472 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-registration-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568489 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkccw\" (UniqueName: \"kubernetes.io/projected/3082d4f7-cc74-4257-9ed3-75c159fe22c1-kube-api-access-hkccw\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568519 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568578 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/31c848d8-f580-49c9-b556-6ef0ec189a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568598 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39522b94-65f1-4c3b-a2ff-855637a38628-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xwh8h\" (UID: \"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59flv\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568641 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-mountpoint-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-certs\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568686 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3082d4f7-cc74-4257-9ed3-75c159fe22c1-config\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mz7\" (UniqueName: \"kubernetes.io/projected/f563f377-261e-4008-b71c-9840fe7f84a7-kube-api-access-27mz7\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-metrics-certs\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3082d4f7-cc74-4257-9ed3-75c159fe22c1-serving-cert\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568901 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-client\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-csi-data-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: 
\"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrzn\" (UniqueName: \"kubernetes.io/projected/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-kube-api-access-pnrzn\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.568988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-service-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-config\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569082 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxfcq\" (UniqueName: \"kubernetes.io/projected/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-kube-api-access-wxfcq\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.569212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 
04:35:46.571383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.571846 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a87db66-197c-4eda-83d1-984d3b0957e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.571894 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.572262 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.572669 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.579977 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-service-ca\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.580442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-config\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.581081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfxt\" (UniqueName: \"kubernetes.io/projected/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-kube-api-access-8gfxt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.581153 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db60d6c8-a99c-4f6c-ab83-ad21555c3586-metrics-tls\") pod 
\"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.581172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-srv-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.581196 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvfc\" (UniqueName: \"kubernetes.io/projected/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-kube-api-access-mdvfc\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.582524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.584538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlw8\" (UniqueName: \"kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.584573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55716cbf-9383-4bb0-806a-09ae626c5e9f-tmpfs\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.584621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f563f377-261e-4008-b71c-9840fe7f84a7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6vw8\" (UniqueName: \"kubernetes.io/projected/39522b94-65f1-4c3b-a2ff-855637a38628-kube-api-access-s6vw8\") pod \"multus-admission-controller-857f4d67dd-xwh8h\" (UID: 
\"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585629 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqlz\" (UniqueName: \"kubernetes.io/projected/035ca985-1000-4c28-aece-3c46abf07371-kube-api-access-2pqlz\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585649 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzst\" (UniqueName: \"kubernetes.io/projected/89c07828-a1a4-4261-b744-fec105f01000-kube-api-access-mbzst\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-plugins-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585685 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585747 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-cert\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585797 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51531dfd-5912-48fc-9648-b87a47679e7d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585815 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndff\" (UniqueName: \"kubernetes.io/projected/db60d6c8-a99c-4f6c-ab83-ad21555c3586-kube-api-access-6ndff\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585835 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.585853 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-node-bootstrap-token\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.587885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.588879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.589406 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.089390412 +0000 UTC m=+137.965407629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.589659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51531dfd-5912-48fc-9648-b87a47679e7d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.589691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh68\" (UniqueName: \"kubernetes.io/projected/581ac37d-89b4-46d1-b607-4f98100b56bc-kube-api-access-qjh68\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.589711 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c07828-a1a4-4261-b744-fec105f01000-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.589775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdwml\" (UniqueName: \"kubernetes.io/projected/6a87db66-197c-4eda-83d1-984d3b0957e8-kube-api-access-gdwml\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.590072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db60d6c8-a99c-4f6c-ab83-ad21555c3586-config-volume\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.590152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a87db66-197c-4eda-83d1-984d3b0957e8-proxy-tls\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.590179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-socket-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 
04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.590745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5nx6\" (UniqueName: \"kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.590945 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/035ca985-1000-4c28-aece-3c46abf07371-service-ca-bundle\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.591006 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qp7t\" (UniqueName: \"kubernetes.io/projected/853f490f-cc09-40aa-bde6-6f3f02e63098-kube-api-access-9qp7t\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.591068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31c848d8-f580-49c9-b556-6ef0ec189a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.591091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-default-certificate\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.591928 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-serving-cert\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.592574 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/035ca985-1000-4c28-aece-3c46abf07371-service-ca-bundle\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.593312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31c848d8-f580-49c9-b556-6ef0ec189a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.593357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp8ms\" (UniqueName: 
\"kubernetes.io/projected/9e2dad3c-1039-41f9-9df2-633a0d146b52-kube-api-access-hp8ms\") pod \"migrator-59844c95c7-6774s\" (UID: \"9e2dad3c-1039-41f9-9df2-633a0d146b52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.595394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51531dfd-5912-48fc-9648-b87a47679e7d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.600838 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l56xp"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.610702 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/31c848d8-f580-49c9-b556-6ef0ec189a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.613596 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.613741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a87db66-197c-4eda-83d1-984d3b0957e8-proxy-tls\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.614259 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.620475 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-etcd-client\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.621166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51531dfd-5912-48fc-9648-b87a47679e7d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.622094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-vz2lv\" (UniqueName: \"kubernetes.io/projected/51531dfd-5912-48fc-9648-b87a47679e7d-kube-api-access-vz2lv\") pod \"openshift-controller-manager-operator-756b6f6bc6-pmpz6\" (UID: \"51531dfd-5912-48fc-9648-b87a47679e7d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.622854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-metrics-certs\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.627016 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.627400 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-default-certificate\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.636072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/035ca985-1000-4c28-aece-3c46abf07371-stats-auth\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.639145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.641852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-srv-cert\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.646904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.647364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.648998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wxfcq\" (UniqueName: \"kubernetes.io/projected/3379b8f6-c1d9-4a16-8324-ed6c67d3fd30-kube-api-access-wxfcq\") pod \"etcd-operator-b45778765-7vd5t\" (UID: \"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.656660 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.663747 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.666508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n54p\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-kube-api-access-6n54p\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.690866 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrzn\" (UniqueName: \"kubernetes.io/projected/ec7bdc96-ccda-46bc-9e52-a76dc10999e5-kube-api-access-pnrzn\") pod \"catalog-operator-68c6474976-c9c57\" (UID: \"ec7bdc96-ccda-46bc-9e52-a76dc10999e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695325 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55716cbf-9383-4bb0-806a-09ae626c5e9f-tmpfs\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695474 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f563f377-261e-4008-b71c-9840fe7f84a7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695491 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6vw8\" (UniqueName: \"kubernetes.io/projected/39522b94-65f1-4c3b-a2ff-855637a38628-kube-api-access-s6vw8\") pod \"multus-admission-controller-857f4d67dd-xwh8h\" (UID: \"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbzst\" (UniqueName: 
\"kubernetes.io/projected/89c07828-a1a4-4261-b744-fec105f01000-kube-api-access-mbzst\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695526 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-plugins-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-cert\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ndff\" (UniqueName: \"kubernetes.io/projected/db60d6c8-a99c-4f6c-ab83-ad21555c3586-kube-api-access-6ndff\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-node-bootstrap-token\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjh68\" (UniqueName: \"kubernetes.io/projected/581ac37d-89b4-46d1-b607-4f98100b56bc-kube-api-access-qjh68\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c07828-a1a4-4261-b744-fec105f01000-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db60d6c8-a99c-4f6c-ab83-ad21555c3586-config-volume\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5nx6\" (UniqueName: \"kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-socket-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695680 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qp7t\" (UniqueName: \"kubernetes.io/projected/853f490f-cc09-40aa-bde6-6f3f02e63098-kube-api-access-9qp7t\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rxj\" (UniqueName: \"kubernetes.io/projected/fe55b40c-eb22-441f-ac88-98cf1199f515-kube-api-access-99rxj\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695749 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb54n\" (UniqueName: \"kubernetes.io/projected/55716cbf-9383-4bb0-806a-09ae626c5e9f-kube-api-access-wb54n\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-webhook-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695828 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695845 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-registration-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695873 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkccw\" (UniqueName: \"kubernetes.io/projected/3082d4f7-cc74-4257-9ed3-75c159fe22c1-kube-api-access-hkccw\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39522b94-65f1-4c3b-a2ff-855637a38628-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xwh8h\" (UID: \"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-mountpoint-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-certs\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3082d4f7-cc74-4257-9ed3-75c159fe22c1-config\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: 
\"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mz7\" (UniqueName: \"kubernetes.io/projected/f563f377-261e-4008-b71c-9840fe7f84a7-kube-api-access-27mz7\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.695994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3082d4f7-cc74-4257-9ed3-75c159fe22c1-serving-cert\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696027 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-csi-data-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db60d6c8-a99c-4f6c-ab83-ad21555c3586-metrics-tls\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-srv-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvfc\" (UniqueName: \"kubernetes.io/projected/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-kube-api-access-mdvfc\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.696302 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.196269611 +0000 UTC m=+138.072286758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.696640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55716cbf-9383-4bb0-806a-09ae626c5e9f-tmpfs\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.708888 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.716978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-certs\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.717619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3082d4f7-cc74-4257-9ed3-75c159fe22c1-config\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.723764 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3082d4f7-cc74-4257-9ed3-75c159fe22c1-serving-cert\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.728611 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-mountpoint-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.729079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f563f377-261e-4008-b71c-9840fe7f84a7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.729603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.729689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-csi-data-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.730023 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.734983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-cert\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.735461 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-webhook-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.736697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-socket-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.738095 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-registration-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.740573 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.741383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31c848d8-f580-49c9-b556-6ef0ec189a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lj9zx\" (UID: \"31c848d8-f580-49c9-b556-6ef0ec189a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.741819 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db60d6c8-a99c-4f6c-ab83-ad21555c3586-metrics-tls\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.742323 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.742778 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55716cbf-9383-4bb0-806a-09ae626c5e9f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.742918 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db60d6c8-a99c-4f6c-ab83-ad21555c3586-config-volume\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.744002 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.747428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe55b40c-eb22-441f-ac88-98cf1199f515-plugins-dir\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.752578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-srv-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.756328 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39522b94-65f1-4c3b-a2ff-855637a38628-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-xwh8h\" (UID: \"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.759450 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c07828-a1a4-4261-b744-fec105f01000-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.762411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/853f490f-cc09-40aa-bde6-6f3f02e63098-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.762766 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581ac37d-89b4-46d1-b607-4f98100b56bc-node-bootstrap-token\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.763517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.765745 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59flv\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.768612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlw8\" (UniqueName: \"kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8\") pod \"marketplace-operator-79b997595-84sk9\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.794326 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q627j"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.796906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.797189 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.297176544 +0000 UTC m=+138.173193691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.811606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqlz\" (UniqueName: \"kubernetes.io/projected/035ca985-1000-4c28-aece-3c46abf07371-kube-api-access-2pqlz\") pod \"router-default-5444994796-px76m\" (UID: \"035ca985-1000-4c28-aece-3c46abf07371\") " pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.818327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdwml\" (UniqueName: \"kubernetes.io/projected/6a87db66-197c-4eda-83d1-984d3b0957e8-kube-api-access-gdwml\") pod \"machine-config-controller-84d6567774-rkc7h\" (UID: \"6a87db66-197c-4eda-83d1-984d3b0957e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:46 crc kubenswrapper[4796]: W1212 04:35:46.834693 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83c81be8_f61a_4bb0_b4b2_26dd509a7d9e.slice/crio-872c946f375547a7b917242e39e7a20a902cc55082d6caaf037d97812dc2dbf9 WatchSource:0}: Error finding container 872c946f375547a7b917242e39e7a20a902cc55082d6caaf037d97812dc2dbf9: Status 404 returned error can't find the container with id 872c946f375547a7b917242e39e7a20a902cc55082d6caaf037d97812dc2dbf9 Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.837206 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.838971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfxt\" (UniqueName: \"kubernetes.io/projected/0a52a2d1-d7e3-45ec-96c5-f54272b92a68-kube-api-access-8gfxt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7j4c\" (UID: \"0a52a2d1-d7e3-45ec-96c5-f54272b92a68\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.847749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp8ms\" (UniqueName: \"kubernetes.io/projected/9e2dad3c-1039-41f9-9df2-633a0d146b52-kube-api-access-hp8ms\") pod \"migrator-59844c95c7-6774s\" (UID: \"9e2dad3c-1039-41f9-9df2-633a0d146b52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.874160 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gnxjc"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.898689 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.899210 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.899340 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.399317886 +0000 UTC m=+138.275335033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.900416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.900060 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvfc\" (UniqueName: \"kubernetes.io/projected/ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2-kube-api-access-mdvfc\") pod \"ingress-canary-scqpl\" (UID: \"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2\") " pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:46 crc kubenswrapper[4796]: E1212 04:35:46.900782 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.400770221 +0000 UTC m=+138.276787368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.921922 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n9d9v"] Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.927193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mz7\" (UniqueName: \"kubernetes.io/projected/f563f377-261e-4008-b71c-9840fe7f84a7-kube-api-access-27mz7\") pod \"package-server-manager-789f6589d5-xwwn6\" (UID: \"f563f377-261e-4008-b71c-9840fe7f84a7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.930860 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6vw8\" (UniqueName: \"kubernetes.io/projected/39522b94-65f1-4c3b-a2ff-855637a38628-kube-api-access-s6vw8\") pod \"multus-admission-controller-857f4d67dd-xwh8h\" (UID: \"39522b94-65f1-4c3b-a2ff-855637a38628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.940519 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.950912 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzst\" (UniqueName: \"kubernetes.io/projected/89c07828-a1a4-4261-b744-fec105f01000-kube-api-access-mbzst\") pod \"control-plane-machine-set-operator-78cbb6b69f-n86m7\" (UID: \"89c07828-a1a4-4261-b744-fec105f01000\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.958101 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" Dec 12 04:35:46 crc kubenswrapper[4796]: I1212 04:35:46.973296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5nx6\" (UniqueName: \"kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6\") pod \"collect-profiles-29425230-zm2tg\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.004007 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.004150 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.004341 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.504324597 +0000 UTC m=+138.380341744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.004907 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ndff\" (UniqueName: \"kubernetes.io/projected/db60d6c8-a99c-4f6c-ab83-ad21555c3586-kube-api-access-6ndff\") pod \"dns-default-2mk57\" (UID: \"db60d6c8-a99c-4f6c-ab83-ad21555c3586\") " pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.028256 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rxj\" (UniqueName: \"kubernetes.io/projected/fe55b40c-eb22-441f-ac88-98cf1199f515-kube-api-access-99rxj\") pod \"csi-hostpathplugin-jfpqc\" (UID: \"fe55b40c-eb22-441f-ac88-98cf1199f515\") " pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.031300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qp7t\" (UniqueName: \"kubernetes.io/projected/853f490f-cc09-40aa-bde6-6f3f02e63098-kube-api-access-9qp7t\") pod \"olm-operator-6b444d44fb-g7r59\" (UID: \"853f490f-cc09-40aa-bde6-6f3f02e63098\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.040363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.053801 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.061331 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjh68\" (UniqueName: \"kubernetes.io/projected/581ac37d-89b4-46d1-b607-4f98100b56bc-kube-api-access-qjh68\") pod \"machine-config-server-rk7sn\" (UID: \"581ac37d-89b4-46d1-b607-4f98100b56bc\") " pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.079732 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkccw\" (UniqueName: \"kubernetes.io/projected/3082d4f7-cc74-4257-9ed3-75c159fe22c1-kube-api-access-hkccw\") pod \"service-ca-operator-777779d784-8gp4h\" (UID: \"3082d4f7-cc74-4257-9ed3-75c159fe22c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.084409 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.094401 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.096236 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.121710 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.104103 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.097686 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.111947 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.114077 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.614057866 +0000 UTC m=+138.490075013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.133964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03f4ca9-4aec-4319-bf66-6c96db6dbe9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wg995\" (UID: \"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.134049 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.106153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.140909 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.148882 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb54n\" (UniqueName: \"kubernetes.io/projected/55716cbf-9383-4bb0-806a-09ae626c5e9f-kube-api-access-wb54n\") pod \"packageserver-d55dfcdfc-7flkv\" (UID: \"55716cbf-9383-4bb0-806a-09ae626c5e9f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.152949 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.164554 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rk7sn" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.169517 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.175935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-scqpl" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.245207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.245750 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.745730057 +0000 UTC m=+138.621747204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.350910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.351555 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.851539643 +0000 UTC m=+138.727556790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.395742 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:47 crc kubenswrapper[4796]: W1212 04:35:47.400656 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa42ca16_b6a5_4bdb_a0de_c0887b77bf61.slice/crio-d04c48eb06899d4b266aec6748b1c2244c197edd102e99491515dd35764355b1 WatchSource:0}: Error finding container d04c48eb06899d4b266aec6748b1c2244c197edd102e99491515dd35764355b1: Status 404 returned error can't find the container with id d04c48eb06899d4b266aec6748b1c2244c197edd102e99491515dd35764355b1 Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.402153 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" event={"ID":"32b6def7-556e-45e2-ae14-84211e7da580","Type":"ContainerStarted","Data":"c670b59a28198b26ccf0e3621b76053a65031d8631604ffbf0e10742a93d04c5"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.420440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" event={"ID":"b6ea1627-29bb-4e98-8ed3-10fe828c7b80","Type":"ContainerStarted","Data":"51113ecbb52c5990f627d9d9d4729eae093ccb8f49c2e555e13b94e918284adf"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.420705 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.451228 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" event={"ID":"6aa5275c-32ff-433c-bdbb-0e4c152224b8","Type":"ContainerStarted","Data":"37eb0928d22956ed599bf266603e91c708f40c1cff0082a36e7a6682678149fe"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.451269 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.452601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.452917 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:47.952903281 +0000 UTC m=+138.828920418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.492170 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4tvxf" event={"ID":"00bcefc1-0041-4c8e-836f-f1abaa3eb344","Type":"ContainerStarted","Data":"637c27af1ad4d922b7cd59a6b03c79d323a4820735d835602701326e3839a8a5"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.492212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4tvxf" event={"ID":"00bcefc1-0041-4c8e-836f-f1abaa3eb344","Type":"ContainerStarted","Data":"8f5943f487008d94a4910b50583e1e8dc77ad82687a448584ee0cc8adfcc8b5b"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.533892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" event={"ID":"8e05fbfb-ba4c-465c-94a2-49f666f39c02","Type":"ContainerStarted","Data":"f68623277e58755c68e959e052d1d9223ea248d474b45583175752bca5b07459"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.553657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.554005 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.05399332 +0000 UTC m=+138.930010467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.559697 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" event={"ID":"cd42823a-83a6-4a22-bec9-8cd20753bdb1","Type":"ContainerStarted","Data":"9b0ece05345c858b8b63c8f3b002b353e9a738b2bcfe4664988a779255a3d4c2"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.559750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" event={"ID":"cd42823a-83a6-4a22-bec9-8cd20753bdb1","Type":"ContainerStarted","Data":"dd5961e69a10ba63bf8bd2f1b44940e2ad4bcb39ac31779cb0a23d821f06ce48"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.560087 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.594969 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" event={"ID":"1e8bbd2f-76b0-4f93-96f7-96f4c152838b","Type":"ContainerStarted","Data":"84cae7a1dc898d04abddc89fee4e4f3fd78b0484a9005cf4b76050d4c586fbbd"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.658776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.659031 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.158985201 +0000 UTC m=+139.035002348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.659494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.660777 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.160756426 +0000 UTC m=+139.036773573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.676412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" event={"ID":"55aa08f3-dce6-4268-a735-fd3a5e10fd77","Type":"ContainerStarted","Data":"f723a921d93f2ae1823bd8ec557ce0d6c1b669f464e8f727a4bf4b3768a869d0"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.676466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" event={"ID":"55aa08f3-dce6-4268-a735-fd3a5e10fd77","Type":"ContainerStarted","Data":"954252b74338fc3e7404561774930046dbdc6e228eabf3b086dc57fb611c9bed"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.703201 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7vd5t"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.711928 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.712503 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l56xp" event={"ID":"8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e","Type":"ContainerStarted","Data":"df6ffc7a22fb35968f3a13f87585518679588bd2c912a1e9f2d80c14f67adf6f"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.712546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l56xp" event={"ID":"8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e","Type":"ContainerStarted","Data":"7715c08179139732e0c2b812ef802decdb5ca501a7a8191e2f99531318836a93"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.713311 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.721319 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" event={"ID":"ada430eb-6dc8-4516-87df-5dbdc97b5563","Type":"ContainerStarted","Data":"056c896a8c3fc97f2d6b96aebd73d476e997e97d1555884adeeb0d43e3e14df7"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.727790 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-l56xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.727834 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l56xp" podUID="8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.756105 4796 generic.go:334] "Generic (PLEG): container finished" podID="82432f4b-5d7d-4b20-9cc7-daacc71964d2" containerID="89c1571d07a8cc0b8376a185d7d2753ea6b64d6691f51675618677ccb4912039" exitCode=0 Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.756355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" event={"ID":"82432f4b-5d7d-4b20-9cc7-daacc71964d2","Type":"ContainerDied","Data":"89c1571d07a8cc0b8376a185d7d2753ea6b64d6691f51675618677ccb4912039"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.756414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" event={"ID":"82432f4b-5d7d-4b20-9cc7-daacc71964d2","Type":"ContainerStarted","Data":"ba45db1e77c6fc4c4f8026fa7913793409a4509f1c4483231629da6da1263316"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.760863 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.761755 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.261739732 +0000 UTC m=+139.137756879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.811098 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" event={"ID":"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca","Type":"ContainerStarted","Data":"6848671cf4176973c4cc3b4358e7093fd32255368cd18dae3450f16a37870ae9"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.821234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" event={"ID":"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e","Type":"ContainerStarted","Data":"872c946f375547a7b917242e39e7a20a902cc55082d6caaf037d97812dc2dbf9"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.828232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" event={"ID":"59c55f34-076c-445a-bd67-836624d9a968","Type":"ContainerStarted","Data":"852a66a9d79f47798ce8a41b520dacf9eecf2d055b588f06b31aab55fb8cdf9a"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.843831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" event={"ID":"4982303d-d471-4a21-a85f-4fd2ce6d3481","Type":"ContainerStarted","Data":"fa8dd51046dae3cb68e30b17cd2c159ea2c97552d5c8908b5f476a7a78d95e1d"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.861904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" event={"ID":"9076f897-29ef-41d6-9cb0-d89f24362c0b","Type":"ContainerStarted","Data":"c9359df0e230ca2e7ebae284413fe0c949b077dbf969b11c13f53b44de0f50b0"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.862665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.863515 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.363502011 +0000 UTC m=+139.239519158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.864034 4796 generic.go:334] "Generic (PLEG): container finished" podID="12948a9c-fd3a-429c-bb98-e3d449208beb" containerID="ac1e0219be4218059edb6deaba032c5b872f174d01a8cb5b9305610eb21fb2b0" exitCode=0 Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.864580 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" event={"ID":"12948a9c-fd3a-429c-bb98-e3d449208beb","Type":"ContainerDied","Data":"ac1e0219be4218059edb6deaba032c5b872f174d01a8cb5b9305610eb21fb2b0"} Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.906978 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.907163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.919492 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.942062 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s"] Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.965603 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.965757 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.465731307 +0000 UTC m=+139.341748454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:47 crc kubenswrapper[4796]: I1212 04:35:47.966148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:47 crc kubenswrapper[4796]: E1212 04:35:47.966505 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.46649145 +0000 UTC m=+139.342508598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.075614 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.075967 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.57595254 +0000 UTC m=+139.451969687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: W1212 04:35:48.125838 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e2dad3c_1039_41f9_9df2_633a0d146b52.slice/crio-fd0719e44673d81ff6252776274504ad5293461d0aafc200425ad21793c7a9a8 WatchSource:0}: Error finding container fd0719e44673d81ff6252776274504ad5293461d0aafc200425ad21793c7a9a8: Status 404 returned error can't find the container with id fd0719e44673d81ff6252776274504ad5293461d0aafc200425ad21793c7a9a8 Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.179836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.180158 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.680145435 +0000 UTC m=+139.556162582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.262255 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4rc6c" podStartSLOduration=117.262242853 podStartE2EDuration="1m57.262242853s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.25958055 +0000 UTC m=+139.135597697" watchObservedRunningTime="2025-12-12 04:35:48.262242853 +0000 UTC m=+139.138260000" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.263704 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx"] Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.303474 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.303798 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.803782567 +0000 UTC m=+139.679799714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.418611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.419185 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:48.919174962 +0000 UTC m=+139.795192109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.509296 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dgpwf" podStartSLOduration=117.509265189 podStartE2EDuration="1m57.509265189s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.49840543 +0000 UTC m=+139.374422577" watchObservedRunningTime="2025-12-12 04:35:48.509265189 +0000 UTC m=+139.385282336" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.520119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.520496 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.020481218 +0000 UTC m=+139.896498365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.568448 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l56xp" podStartSLOduration=117.568431362 podStartE2EDuration="1m57.568431362s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.533444991 +0000 UTC m=+139.409462138" watchObservedRunningTime="2025-12-12 04:35:48.568431362 +0000 UTC m=+139.444448509" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.568997 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" podStartSLOduration=117.568991918 podStartE2EDuration="1m57.568991918s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.567380489 +0000 UTC m=+139.443397646" watchObservedRunningTime="2025-12-12 04:35:48.568991918 +0000 UTC m=+139.445009065" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.622037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.623609 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.12359724 +0000 UTC m=+139.999614387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.630452 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-45hnd" podStartSLOduration=116.630418583 podStartE2EDuration="1m56.630418583s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.630055561 +0000 UTC m=+139.506072708" watchObservedRunningTime="2025-12-12 04:35:48.630418583 +0000 UTC m=+139.506435730" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.681991 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.723227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.724268 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.224252456 +0000 UTC m=+140.100269603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.732212 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg"] Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.817972 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" podStartSLOduration=116.817952965 podStartE2EDuration="1m56.817952965s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:48.816798298 +0000 UTC m=+139.692815445" watchObservedRunningTime="2025-12-12 04:35:48.817952965 +0000 UTC m=+139.693970112" Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.827422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.827702 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.327689128 +0000 UTC m=+140.203706275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.882059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" event={"ID":"ec7bdc96-ccda-46bc-9e52-a76dc10999e5","Type":"ContainerStarted","Data":"d472aa732ade6297430cbfabea4af256a9e7ce0ee86d1e49bfa3b91d14062221"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.882834 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" event={"ID":"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30","Type":"ContainerStarted","Data":"06e5cb96c09e919a8b607323a60ffdc735bcc227370a5da9b331a709881a932c"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.887160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" event={"ID":"6aa5275c-32ff-433c-bdbb-0e4c152224b8","Type":"ContainerStarted","Data":"9d7c603bebca39314bd507c2f90e7f876f092840c9d032e55532fa8770e4c06f"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.888304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rk7sn" event={"ID":"581ac37d-89b4-46d1-b607-4f98100b56bc","Type":"ContainerStarted","Data":"0e083b8e9b66437ea2d7f586da40d90b1c2ab95910923544cb4ec508b084b015"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.895867 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" event={"ID":"31c848d8-f580-49c9-b556-6ef0ec189a51","Type":"ContainerStarted","Data":"0db743a05c88afe8190c84188fb2f50f0377bda3449ddd5b2b66b99da209888c"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.930094 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" event={"ID":"9e2dad3c-1039-41f9-9df2-633a0d146b52","Type":"ContainerStarted","Data":"fd0719e44673d81ff6252776274504ad5293461d0aafc200425ad21793c7a9a8"} Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.930397 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.930559 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.430518271 +0000 UTC m=+140.306535418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:48 crc kubenswrapper[4796]: I1212 04:35:48.930606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:48 crc kubenswrapper[4796]: E1212 04:35:48.931009 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.430995706 +0000 UTC m=+140.307012853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.034980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.035271 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.535256864 +0000 UTC m=+140.411274011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.039267 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.115454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" event={"ID":"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61","Type":"ContainerStarted","Data":"d04c48eb06899d4b266aec6748b1c2244c197edd102e99491515dd35764355b1"} Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.135074 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" event={"ID":"9076f897-29ef-41d6-9cb0-d89f24362c0b","Type":"ContainerStarted","Data":"8f7e19b168bd98b0f98c72e24323cdd0cb3e143350b80d4043abb593fc3ed38f"} Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.136433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.138170 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.638155039 +0000 UTC m=+140.514172186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.146588 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4tvxf" podStartSLOduration=118.146571171 podStartE2EDuration="1m58.146571171s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:49.078169081 +0000 UTC m=+139.954186238" watchObservedRunningTime="2025-12-12 04:35:49.146571171 +0000 UTC m=+140.022588328" Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.167141 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" event={"ID":"51531dfd-5912-48fc-9648-b87a47679e7d","Type":"ContainerStarted","Data":"be8bec13622fb644dca40cee1d95e664f0519fcabc81b086b1c129fc46314e3b"} Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.236601 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.237243 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.237342 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.737323739 +0000 UTC m=+140.613340886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.238990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.239414 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 04:35:49.739404653 +0000 UTC m=+140.615421800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.245451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-px76m" event={"ID":"035ca985-1000-4c28-aece-3c46abf07371","Type":"ContainerStarted","Data":"095d050f7d54f52708752a8d2162d9dafee7dcbd00578f7f4502f364b53d0271"} Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.261661 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" event={"ID":"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e","Type":"ContainerStarted","Data":"9ce5d9f57896de55addf0bb02360df6e29062b14b2a3fc3d061893b991182c1b"} Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.262424 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-l56xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.262464 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l56xp" podUID="8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.270408 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.274516 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-scqpl"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.340802 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.342025 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.842010079 +0000 UTC m=+140.718027226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.387458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.401913 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-84sk9"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.442989 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.443394 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:49.943372657 +0000 UTC m=+140.819389884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.543645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m5vc2" podStartSLOduration=117.5436275 podStartE2EDuration="1m57.5436275s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:49.469697647 +0000 UTC m=+140.345714794" watchObservedRunningTime="2025-12-12 04:35:49.5436275 +0000 UTC m=+140.419644647" Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.544751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.544974 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.044957652 +0000 UTC m=+140.920974789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.587302 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6"] Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.645968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.646456 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.146442323 +0000 UTC m=+141.022459470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.750177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.763976 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.263942094 +0000 UTC m=+141.139959241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.865594 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.865934 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.36592223 +0000 UTC m=+141.241939367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:49 crc kubenswrapper[4796]: I1212 04:35:49.966772 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:49 crc kubenswrapper[4796]: E1212 04:35:49.967032 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.467018869 +0000 UTC m=+141.343036016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.071544 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.072252 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.572238748 +0000 UTC m=+141.448255905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.178572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.178878 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.678863068 +0000 UTC m=+141.554880215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.206680 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xwh8h"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.279683 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.280172 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.780160464 +0000 UTC m=+141.656177601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: W1212 04:35:50.294398 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39522b94_65f1_4c3b_a2ff_855637a38628.slice/crio-5c1743e5440ba5b095518aebde2bfc92386bbe3fd718c864bfd58af0f81777e0 WatchSource:0}: Error finding container 5c1743e5440ba5b095518aebde2bfc92386bbe3fd718c864bfd58af0f81777e0: Status 404 returned error can't find the container with id 5c1743e5440ba5b095518aebde2bfc92386bbe3fd718c864bfd58af0f81777e0 Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.336243 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jfpqc"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.344739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" event={"ID":"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61","Type":"ContainerStarted","Data":"aa9dd6ab6069d9422f796e0914b3d0ea7f3be64f7c8db27295113a2ae5bff2fe"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.351697 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" event={"ID":"921d55d1-a229-423a-a84f-c727ecd214a4","Type":"ContainerStarted","Data":"3c881a38c7c03b7f595e21c8f250859f8e68cd99d07d327d4b166f2475dafe1c"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.355434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" 
event={"ID":"6a87db66-197c-4eda-83d1-984d3b0957e8","Type":"ContainerStarted","Data":"c7e64eeca6d45a3668243a04bb426a3b16cc1b8b1e0ea007fdbe81ee063a5e2c"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.355469 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2mk57"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.367247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" event={"ID":"55aa08f3-dce6-4268-a735-fd3a5e10fd77","Type":"ContainerStarted","Data":"942793637c76d7716e78fff0844ee133b3987dc57a3319f3bd876fd97c685920"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.369957 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.381573 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.381838 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.881821711 +0000 UTC m=+141.757838858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.400432 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.411292 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" event={"ID":"ada430eb-6dc8-4516-87df-5dbdc97b5563","Type":"ContainerStarted","Data":"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.412182 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.413047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" event={"ID":"2bd8f2dd-9f3f-4f41-a4e6-423b159176ca","Type":"ContainerStarted","Data":"7092bd35596cac7c36b8886d6b22e8438de68acd9fa16deb4fd2eb8a40a43a70"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.415758 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4tb6c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 12 04:35:50 crc kubenswrapper[4796]: 
I1212 04:35:50.415831 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.432673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" event={"ID":"6adaae06-a6aa-4040-9a14-5490cd58b1d9","Type":"ContainerStarted","Data":"6758a06670cf4724b8cec0ab13ec7c6c6dcb245fe1094b1a77c08889b51450d8"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.433917 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qngxr" podStartSLOduration=119.433900933 podStartE2EDuration="1m59.433900933s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.432890582 +0000 UTC m=+141.308907729" watchObservedRunningTime="2025-12-12 04:35:50.433900933 +0000 UTC m=+141.309918080" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.435540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" event={"ID":"3379b8f6-c1d9-4a16-8324-ed6c67d3fd30","Type":"ContainerStarted","Data":"2f07026e3fb8c608b81a0a4133ac796fc2515dc6c3673281f211db4e211f54e0"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.446998 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-scqpl" event={"ID":"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2","Type":"ContainerStarted","Data":"c3d0ac13ef2c6f98427bf2de744460fa66986d97bc73f2ec9228fcff85800fcd"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.450704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" event={"ID":"9e2dad3c-1039-41f9-9df2-633a0d146b52","Type":"ContainerStarted","Data":"54982c54142f2a75b875b8e36e0e7284a5125851e5dcdaba5bbc6feaae8931d1"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.458762 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" event={"ID":"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d","Type":"ContainerStarted","Data":"3ac7c88050d214d7b7a269d5cb6e3a172a89818f10884b5aacd994e34e7ad4a2"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.471443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rk7sn" event={"ID":"581ac37d-89b4-46d1-b607-4f98100b56bc","Type":"ContainerStarted","Data":"54cf2abf59842d3c069ca20e04a1f91b83807f0a6916ef7a15dacd89b3c9ed4d"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.483066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.483411 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:50.983400036 +0000 UTC m=+141.859417183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.496645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" event={"ID":"51531dfd-5912-48fc-9648-b87a47679e7d","Type":"ContainerStarted","Data":"1837fd461967f09142eea70f537ce93e23e035a5bf6a8334656d306ace7b1925"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.510890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-px76m" event={"ID":"035ca985-1000-4c28-aece-3c46abf07371","Type":"ContainerStarted","Data":"75f9c2fdb3a2798679ef66d9a06af21d2277a7f1eb5ea88e32fa454e3811bf2c"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.536259 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tdlpn" podStartSLOduration=118.536243732 podStartE2EDuration="1m58.536243732s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.493568182 +0000 UTC m=+141.369585329" watchObservedRunningTime="2025-12-12 04:35:50.536243732 +0000 UTC m=+141.412260879" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.536566 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" podStartSLOduration=119.536561962 podStartE2EDuration="1m59.536561962s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.535040324 +0000 UTC m=+141.411057481" watchObservedRunningTime="2025-12-12 04:35:50.536561962 +0000 UTC m=+141.412579109" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.566673 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7"] Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.574761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" event={"ID":"b6ea1627-29bb-4e98-8ed3-10fe828c7b80","Type":"ContainerStarted","Data":"106888fcc40bf9eef2874f1a2324da89dd38202db0bea82f3948a23e3ee25d4b"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.586715 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:50 crc 
kubenswrapper[4796]: E1212 04:35:50.588085 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.088045745 +0000 UTC m=+141.964062932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.597328 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7vd5t" podStartSLOduration=119.597309814 podStartE2EDuration="1m59.597309814s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.595502158 +0000 UTC m=+141.471519305" watchObservedRunningTime="2025-12-12 04:35:50.597309814 +0000 UTC m=+141.473326961" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.603388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" event={"ID":"55716cbf-9383-4bb0-806a-09ae626c5e9f","Type":"ContainerStarted","Data":"6377fccdd7f11505a5f055dddad98a481dc622f086d86384fe6c4660bd291dde"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.672003 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pmpz6" podStartSLOduration=119.67198041 podStartE2EDuration="1m59.67198041s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.635891036 +0000 UTC m=+141.511908183" watchObservedRunningTime="2025-12-12 04:35:50.67198041 +0000 UTC m=+141.547997557" Dec 12 04:35:50 crc kubenswrapper[4796]: W1212 04:35:50.683809 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853f490f_cc09_40aa_bde6_6f3f02e63098.slice/crio-faeb2130c3e34f43e2e551bb24416f9d6ba74d3f5da855a7ce019d69c00da168 WatchSource:0}: Error finding container faeb2130c3e34f43e2e551bb24416f9d6ba74d3f5da855a7ce019d69c00da168: Status 404 returned error can't find the container with id faeb2130c3e34f43e2e551bb24416f9d6ba74d3f5da855a7ce019d69c00da168 Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.690394 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.690829 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.190813787 +0000 UTC m=+142.066830934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.712407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" event={"ID":"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b","Type":"ContainerStarted","Data":"661bc586863ca71c1c590274af0e288cc751caccf8b206f4d88ee2d988f5de3a"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.729053 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-px76m" podStartSLOduration=119.729035077 podStartE2EDuration="1m59.729035077s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.726257621 +0000 UTC m=+141.602274768" watchObservedRunningTime="2025-12-12 04:35:50.729035077 +0000 UTC m=+141.605052224" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.784249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" event={"ID":"32b6def7-556e-45e2-ae14-84211e7da580","Type":"ContainerStarted","Data":"1ffd6edd1dbce014bec8a59fcd73ed0c978f5b3634e0249c8d476da51d602420"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.795128 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.795775 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.295740285 +0000 UTC m=+142.171757432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.842941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" event={"ID":"82432f4b-5d7d-4b20-9cc7-daacc71964d2","Type":"ContainerStarted","Data":"848cf3345f4e68893e5aeb841fd95435f1e29d6285ada02f4c473b74bfc7c7dd"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.843554 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.846718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" event={"ID":"f563f377-261e-4008-b71c-9840fe7f84a7","Type":"ContainerStarted","Data":"6b8cd6292c9d09193b55a9796abff7c8b77ca58dc026e2c7fcd404261a7da978"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.857032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" event={"ID":"3082d4f7-cc74-4257-9ed3-75c159fe22c1","Type":"ContainerStarted","Data":"14587246020d25334a5c810bf8c3bbab3f747ce377e4960f6e228865a36b38b7"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.860771 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rk7sn" podStartSLOduration=7.860751161 podStartE2EDuration="7.860751161s" podCreationTimestamp="2025-12-12 04:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.790623536 +0000 UTC m=+141.666640683" watchObservedRunningTime="2025-12-12 04:35:50.860751161 +0000 UTC m=+141.736768308" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.893179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" event={"ID":"59c55f34-076c-445a-bd67-836624d9a968","Type":"ContainerStarted","Data":"99b8ff66ea2ab6bbd0c23460c99734cd7d0f9aa1abaf6203a2e709a7592c760d"} Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.893272 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.893826 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-l56xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.893851 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l56xp" podUID="8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: 
connection refused" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.898536 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-gnxjc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.898592 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" podUID="6aa5275c-32ff-433c-bdbb-0e4c152224b8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.905066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:50 crc kubenswrapper[4796]: E1212 04:35:50.906469 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.406455865 +0000 UTC m=+142.282473012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:50 crc kubenswrapper[4796]: I1212 04:35:50.928431 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" podStartSLOduration=119.928413689 podStartE2EDuration="1m59.928413689s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.857271042 +0000 UTC m=+141.733288179" watchObservedRunningTime="2025-12-12 04:35:50.928413689 +0000 UTC m=+141.804430846" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.005129 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n9d9v" podStartSLOduration=119.005099467 podStartE2EDuration="1m59.005099467s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:50.929932885 +0000 UTC m=+141.805950052" watchObservedRunningTime="2025-12-12 04:35:51.005099467 +0000 UTC m=+141.881116614" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.015121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.016354 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.516335427 +0000 UTC m=+142.392352574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.020304 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.038240 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:51 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:51 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:51 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.038477 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.060224 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gnxjc" podStartSLOduration=120.060208454 podStartE2EDuration="2m0.060208454s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:51.005588762 +0000 UTC m=+141.881605909" watchObservedRunningTime="2025-12-12 04:35:51.060208454 +0000 UTC m=+141.936225601" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.064825 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" podStartSLOduration=120.064807297 podStartE2EDuration="2m0.064807297s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:51.058658005 +0000 UTC m=+141.934675152" watchObservedRunningTime="2025-12-12 04:35:51.064807297 +0000 UTC m=+141.940824454" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.105574 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72sgg" podStartSLOduration=119.105553616 podStartE2EDuration="1m59.105553616s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:51.098034102 +0000 UTC m=+141.974051249" watchObservedRunningTime="2025-12-12 04:35:51.105553616 +0000 UTC m=+141.981570763" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.122090 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.122718 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.62270556 +0000 UTC m=+142.498722707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.224962 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.225263 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.725247945 +0000 UTC m=+142.601265092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.325981 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.326374 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.826358465 +0000 UTC m=+142.702375612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.426702 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.426817 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.926800384 +0000 UTC m=+142.802817531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.427100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.427520 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:51.927504335 +0000 UTC m=+142.803521482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.527944 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.528251 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.028217752 +0000 UTC m=+142.904234909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.629791 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.630124 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.130107927 +0000 UTC m=+143.006125074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.730669 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.731167 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.231149255 +0000 UTC m=+143.107166402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.832543 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.832968 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.332949636 +0000 UTC m=+143.208966863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.896895 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" event={"ID":"39522b94-65f1-4c3b-a2ff-855637a38628","Type":"ContainerStarted","Data":"5c1743e5440ba5b095518aebde2bfc92386bbe3fd718c864bfd58af0f81777e0"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.898593 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" event={"ID":"12948a9c-fd3a-429c-bb98-e3d449208beb","Type":"ContainerStarted","Data":"1730d21e816bb67a351a368669acb5d414a01e398a1a9251558ecab7bb62875f"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.899579 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" event={"ID":"89c07828-a1a4-4261-b744-fec105f01000","Type":"ContainerStarted","Data":"fdc9cd57eef7d3c7422126350f60d905ca6fc0ec51a5dac15510fa312023d01f"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.900652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" event={"ID":"fe55b40c-eb22-441f-ac88-98cf1199f515","Type":"ContainerStarted","Data":"fe11d88633c10b4d3ff429c607eaff3f4ed9075ed806b668dbd007e60c51e94e"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.901604 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2mk57" event={"ID":"db60d6c8-a99c-4f6c-ab83-ad21555c3586","Type":"ContainerStarted","Data":"518ffc59fa5e345e422035a3c24345426bf0a0a7d4e998cf877168d175174dee"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.902869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" event={"ID":"ec7bdc96-ccda-46bc-9e52-a76dc10999e5","Type":"ContainerStarted","Data":"c2582973157de0ab40c1594931eea87a774e5f4d391be10601653422dc885937"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.903087 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.903947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" event={"ID":"0a52a2d1-d7e3-45ec-96c5-f54272b92a68","Type":"ContainerStarted","Data":"6c918cb0ce6884f9427a9482ceaf7eb8adbcc39ea9833f821a47db643c75b6a2"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.904602 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-c9c57 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.904639 4796 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" podUID="ec7bdc96-ccda-46bc-9e52-a76dc10999e5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.914537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" event={"ID":"55716cbf-9383-4bb0-806a-09ae626c5e9f","Type":"ContainerStarted","Data":"2bd2449510ce849bcdd4696f1514ea815ac0ad4760817b498eb1e8f79e1ccb2a"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.914725 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.915634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" event={"ID":"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d","Type":"ContainerStarted","Data":"a4e404f1ebdbf77818b924ce8459cd4ee4a8fce30b7db209b688b13ee434b936"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.916665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" event={"ID":"853f490f-cc09-40aa-bde6-6f3f02e63098","Type":"ContainerStarted","Data":"faeb2130c3e34f43e2e551bb24416f9d6ba74d3f5da855a7ce019d69c00da168"} Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.916788 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7flkv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.916844 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" podUID="55716cbf-9383-4bb0-806a-09ae626c5e9f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.917243 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4tb6c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.917293 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.923670 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" podStartSLOduration=119.923650251 podStartE2EDuration="1m59.923650251s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:51.921547026 +0000 UTC m=+142.797564173" 
watchObservedRunningTime="2025-12-12 04:35:51.923650251 +0000 UTC m=+142.799667398" Dec 12 04:35:51 crc kubenswrapper[4796]: I1212 04:35:51.935686 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:51 crc kubenswrapper[4796]: E1212 04:35:51.937515 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.437487842 +0000 UTC m=+143.313505049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.016835 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:52 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:52 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:52 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.017239 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.027315 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" podStartSLOduration=120.02730189 podStartE2EDuration="2m0.02730189s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:51.969988275 +0000 UTC m=+142.846005432" watchObservedRunningTime="2025-12-12 04:35:52.02730189 +0000 UTC m=+142.903319037" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.028626 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" podStartSLOduration=120.028620471 podStartE2EDuration="2m0.028620471s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:52.022721578 +0000 UTC m=+142.898738745" watchObservedRunningTime="2025-12-12 04:35:52.028620471 +0000 UTC m=+142.904637618" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.039771 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-gnxjc" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.040014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.040294 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.540270535 +0000 UTC m=+143.416287682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.140549 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.140675 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.640653211 +0000 UTC m=+143.516670358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.141063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.141399 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.641391724 +0000 UTC m=+143.517408871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.241876 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.242173 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.742159623 +0000 UTC m=+143.618176770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.342921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.343538 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.843526881 +0000 UTC m=+143.719544028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.444148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.444789 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:52.944774235 +0000 UTC m=+143.820791382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.547215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.547571 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.047554636 +0000 UTC m=+143.923571793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.652261 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.652793 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.152775355 +0000 UTC m=+144.028792502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.753653 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.753998 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.253981778 +0000 UTC m=+144.129998925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.854409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.854667 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.354629293 +0000 UTC m=+144.230646440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.933912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" event={"ID":"31c848d8-f580-49c9-b556-6ef0ec189a51","Type":"ContainerStarted","Data":"c39aa43757135b8d8efff5c180feff2410b2d4b4c3b34e0ef98d1c43e7aba56a"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.935538 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" event={"ID":"853f490f-cc09-40aa-bde6-6f3f02e63098","Type":"ContainerStarted","Data":"b61c66dd5ca329a51dc5ffa422c8a86d03e09206eee961f6f1fd6550f78f2346"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.936325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.937183 4796 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g7r59 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.937211 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" podUID="853f490f-cc09-40aa-bde6-6f3f02e63098" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.938173 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" 
event={"ID":"6a87db66-197c-4eda-83d1-984d3b0957e8","Type":"ContainerStarted","Data":"0f187da8b39227b89e291529096be6ea5b269817591b5fb070c8a26cbe21af3b"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.938193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" event={"ID":"6a87db66-197c-4eda-83d1-984d3b0957e8","Type":"ContainerStarted","Data":"e95b9eb4c6aacb101cedd123ae69f53934847819c7becaa596cc2ce8b7601c23"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.939416 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" event={"ID":"6adaae06-a6aa-4040-9a14-5490cd58b1d9","Type":"ContainerStarted","Data":"47d717eca729cfee6d5471aee80bab56099571d782bc23a56b5add2eea830d5d"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.940723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" event={"ID":"921d55d1-a229-423a-a84f-c727ecd214a4","Type":"ContainerStarted","Data":"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.941157 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.942181 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-84sk9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.942205 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.943369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" event={"ID":"9e2dad3c-1039-41f9-9df2-633a0d146b52","Type":"ContainerStarted","Data":"0b4e49a4c1965a4a6bf66132693f3f336a6b1044259cff0430b33d822191a5ef"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.945063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" event={"ID":"aa42ca16-b6a5-4bdb-a0de-c0887b77bf61","Type":"ContainerStarted","Data":"5d1e2180db2971ffb06f84063252bc1ab8a356d144374dd2e0cb9b5788d2f1c8"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.946359 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" event={"ID":"39522b94-65f1-4c3b-a2ff-855637a38628","Type":"ContainerStarted","Data":"4b1017a58406c6d438a921daa9e2db2094c2397c62eb5868a6932508bcc238bf"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.947515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" event={"ID":"f563f377-261e-4008-b71c-9840fe7f84a7","Type":"ContainerStarted","Data":"ddfeaf8675cea1ac7be4c4acc85a32e5c219c3cb7ca3ae829c3d7a6ab7f5c9d3"} Dec 12 04:35:52 crc 
kubenswrapper[4796]: I1212 04:35:52.947537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" event={"ID":"f563f377-261e-4008-b71c-9840fe7f84a7","Type":"ContainerStarted","Data":"194b7815e390f7562bd28fe82a7c4b13d68b89e6984527121f4f8e2baf67b27d"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.947861 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.948806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" event={"ID":"89c07828-a1a4-4261-b744-fec105f01000","Type":"ContainerStarted","Data":"9522d8e1cf1003b8725d4d741d83f5c263abe64fc64bbf6840dc02c8323eb5b7"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.950145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" event={"ID":"b1c0c8e3-e1de-4ce9-99e3-c048e499f10d","Type":"ContainerStarted","Data":"67433ffca62bcaa0ce18fae8b7db50d8ef8325aa670da48725d466e78eddf382"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.951454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2mk57" event={"ID":"db60d6c8-a99c-4f6c-ab83-ad21555c3586","Type":"ContainerStarted","Data":"6cbc33bdf53de9a632be9b1a8542acad5c725ec1fb17a8541a9e1e9723f8f89d"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.952537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" event={"ID":"83c81be8-f61a-4bb0-b4b2-26dd509a7d9e","Type":"ContainerStarted","Data":"f70a7baa2dcf9cf544fbfe9552bdc4565b253440d3495dabdbafc6143b57a8bb"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.953721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-scqpl" event={"ID":"ebef0d4b-7f95-49b5-b6e6-2eb934d0c7f2","Type":"ContainerStarted","Data":"107697da0a4163c36ca16dc291e2c2a7155505f72a951f21919d5e7d3fd167c4"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.955479 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:52 crc kubenswrapper[4796]: E1212 04:35:52.957964 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.457945021 +0000 UTC m=+144.333962288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.960539 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" event={"ID":"3082d4f7-cc74-4257-9ed3-75c159fe22c1","Type":"ContainerStarted","Data":"1b476f5046bd17978233d2a0a3f613b5bfb22357a238a7fb1a2278cb33391458"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.961592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" event={"ID":"0a52a2d1-d7e3-45ec-96c5-f54272b92a68","Type":"ContainerStarted","Data":"a08fc4203da100f973f64b4685d3e40f2633c60b4672b8837cf334cc7348856c"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.962771 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" event={"ID":"d03f4ca9-4aec-4319-bf66-6c96db6dbe9b","Type":"ContainerStarted","Data":"197f7eaf35b0695c1f373da5d6c4d3f8a7d5fdd700cf37260af094f00db9afc3"} Dec 12 04:35:52 crc kubenswrapper[4796]: I1212 04:35:52.964207 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" event={"ID":"fe55b40c-eb22-441f-ac88-98cf1199f515","Type":"ContainerStarted","Data":"988bba49853aa833ab7e69ef1fad32bbe6706136674fda4c54c74b18fc160880"} Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.021959 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:53 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:53 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:53 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.022001 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.022684 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c9c57" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.039174 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lj9zx" podStartSLOduration=122.039159271 podStartE2EDuration="2m2.039159271s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.015911067 +0000 UTC m=+143.891928214" watchObservedRunningTime="2025-12-12 04:35:53.039159271 +0000 UTC m=+143.915176408" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.056389 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.056841 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.556823081 +0000 UTC m=+144.432840228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.067805 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n86m7" podStartSLOduration=121.067777993 podStartE2EDuration="2m1.067777993s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.040310086 +0000 UTC m=+143.916327223" watchObservedRunningTime="2025-12-12 04:35:53.067777993 +0000 UTC m=+143.943795140" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.091093 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-scqpl" podStartSLOduration=9.091076618 podStartE2EDuration="9.091076618s" podCreationTimestamp="2025-12-12 04:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.088832058 +0000 UTC m=+143.964849205" watchObservedRunningTime="2025-12-12 04:35:53.091076618 +0000 UTC m=+143.967093765" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.091628 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6774s" podStartSLOduration=121.091622145 podStartE2EDuration="2m1.091622145s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.069965111 +0000 UTC m=+143.945982248" watchObservedRunningTime="2025-12-12 04:35:53.091622145 +0000 UTC m=+143.967639292" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.144870 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wg995" podStartSLOduration=121.144851763 podStartE2EDuration="2m1.144851763s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.144777381 +0000 UTC m=+144.020794528" watchObservedRunningTime="2025-12-12 04:35:53.144851763 +0000 UTC 
m=+144.020868910" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.146857 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podStartSLOduration=121.146851595 podStartE2EDuration="2m1.146851595s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.122495967 +0000 UTC m=+143.998513114" watchObservedRunningTime="2025-12-12 04:35:53.146851595 +0000 UTC m=+144.022868742" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.158019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.159631 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.659616474 +0000 UTC m=+144.535633621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.196863 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkc7h" podStartSLOduration=121.196845343 podStartE2EDuration="2m1.196845343s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.196512183 +0000 UTC m=+144.072529330" watchObservedRunningTime="2025-12-12 04:35:53.196845343 +0000 UTC m=+144.072862490" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.258817 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8gp4h" podStartSLOduration=121.258797433 podStartE2EDuration="2m1.258797433s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.258356699 +0000 UTC m=+144.134373866" watchObservedRunningTime="2025-12-12 04:35:53.258797433 +0000 UTC m=+144.134814590" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.259258 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.759239227 +0000 UTC m=+144.635256374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.259186 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.259602 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.259856 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.759849216 +0000 UTC m=+144.635866363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.334712 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" podStartSLOduration=121.334692387 podStartE2EDuration="2m1.334692387s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.332703775 +0000 UTC m=+144.208720922" watchObservedRunningTime="2025-12-12 04:35:53.334692387 +0000 UTC m=+144.210709524" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.360791 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.361716 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.861695379 +0000 UTC m=+144.737712526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.397501 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7j4c" podStartSLOduration=122.397486274 podStartE2EDuration="2m2.397486274s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.367836989 +0000 UTC m=+144.243854146" watchObservedRunningTime="2025-12-12 04:35:53.397486274 +0000 UTC m=+144.273503421" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.398537 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" podStartSLOduration=122.398531405 podStartE2EDuration="2m2.398531405s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.396957347 +0000 UTC m=+144.272974504" watchObservedRunningTime="2025-12-12 04:35:53.398531405 +0000 UTC m=+144.274548542" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.462478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.462805 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:53.962794287 +0000 UTC m=+144.838811434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.473170 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-q627j" podStartSLOduration=122.473151351 podStartE2EDuration="2m2.473151351s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.433679671 +0000 UTC m=+144.309696818" watchObservedRunningTime="2025-12-12 04:35:53.473151351 +0000 UTC m=+144.349168488" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.535772 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ptlwf" podStartSLOduration=121.53575609 podStartE2EDuration="2m1.53575609s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.472570302 +0000 UTC m=+144.348587439" watchObservedRunningTime="2025-12-12 04:35:53.53575609 +0000 UTC m=+144.411773237" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.535930 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" podStartSLOduration=121.535925436 podStartE2EDuration="2m1.535925436s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.535592465 +0000 UTC m=+144.411609622" watchObservedRunningTime="2025-12-12 04:35:53.535925436 +0000 UTC m=+144.411942583" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.565092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.565501 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.065483887 +0000 UTC m=+144.941501034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.568787 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t7rhc" podStartSLOduration=122.56877298 podStartE2EDuration="2m2.56877298s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:53.567264082 +0000 UTC m=+144.443281229" watchObservedRunningTime="2025-12-12 04:35:53.56877298 +0000 UTC m=+144.444790127" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.667954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.668252 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.168241528 +0000 UTC m=+145.044258675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.769754 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.769896 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.269872384 +0000 UTC m=+145.145889531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.770382 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.770660 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.270645627 +0000 UTC m=+145.146662774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.871098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.871310 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.371270853 +0000 UTC m=+145.247288000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.871405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.871690 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.371680405 +0000 UTC m=+145.247697552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.964903 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7flkv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.964965 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" podUID="55716cbf-9383-4bb0-806a-09ae626c5e9f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.970288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2mk57" event={"ID":"db60d6c8-a99c-4f6c-ab83-ad21555c3586","Type":"ContainerStarted","Data":"25dcd85626a05cc32437d29e03fb806ef56230b56dd91bb4964f74f6bd1b89c6"} Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.971016 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2mk57" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.971962 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.972151 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.472134144 +0000 UTC m=+145.348151291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.972403 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.972764 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" event={"ID":"39522b94-65f1-4c3b-a2ff-855637a38628","Type":"ContainerStarted","Data":"6c785de7315873cc0d6578c927171846b83cc2f8130cb29b2950637e3310a3a6"} Dec 12 04:35:53 crc kubenswrapper[4796]: E1212 04:35:53.972823 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.472800155 +0000 UTC m=+145.348817402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.973830 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-84sk9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.973868 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 04:35:53 crc kubenswrapper[4796]: I1212 04:35:53.979599 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g7r59" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.005926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2mk57" podStartSLOduration=10.005907556 podStartE2EDuration="10.005907556s" podCreationTimestamp="2025-12-12 04:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:54.004233264 +0000 UTC m=+144.880250411" watchObservedRunningTime="2025-12-12 04:35:54.005907556 +0000 UTC m=+144.881924703" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.011879 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:54 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:54 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:54 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.011924 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.057346 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xwh8h" podStartSLOduration=122.057332198 podStartE2EDuration="2m2.057332198s" podCreationTimestamp="2025-12-12 04:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:54.05576999 +0000 UTC m=+144.931787137" watchObservedRunningTime="2025-12-12 04:35:54.057332198 +0000 UTC m=+144.933349345" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.073248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.073432 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.573404699 +0000 UTC m=+145.449421846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.074167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.077378 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.577364753 +0000 UTC m=+145.453381900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.139178 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.139741 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.146161 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.147453 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.175758 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.175920 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.675895002 +0000 UTC m=+145.551912149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.175968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.176017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.176186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.176540 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.676525901 +0000 UTC m=+145.552543048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.179724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.278171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.278631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.278664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.278771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.278842 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.778828438 +0000 UTC m=+145.654845585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.325727 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.382731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.383108 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.883096347 +0000 UTC m=+145.759113494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.467000 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.486862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.487386 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:54.987364084 +0000 UTC m=+145.863381241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.588496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.588800 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.088788814 +0000 UTC m=+145.964805961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.690182 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.690402 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.190372819 +0000 UTC m=+146.066389966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.690485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.690779 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.190766261 +0000 UTC m=+146.066783408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.791753 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.792372 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.292356386 +0000 UTC m=+146.168373533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.893240 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.893567 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.393554618 +0000 UTC m=+146.269571765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.934301 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9sbmb"] Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.935216 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.936631 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.966799 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sbmb"] Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.978126 4796 generic.go:334] "Generic (PLEG): container finished" podID="6adaae06-a6aa-4040-9a14-5490cd58b1d9" containerID="47d717eca729cfee6d5471aee80bab56099571d782bc23a56b5add2eea830d5d" exitCode=0 Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.978184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" event={"ID":"6adaae06-a6aa-4040-9a14-5490cd58b1d9","Type":"ContainerDied","Data":"47d717eca729cfee6d5471aee80bab56099571d782bc23a56b5add2eea830d5d"} Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.979879 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" event={"ID":"fe55b40c-eb22-441f-ac88-98cf1199f515","Type":"ContainerStarted","Data":"83ba90f981270f989d0dfbe77f2413ce585a64e3b94139d71b6823c4e5a7e74d"} Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.980396 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-84sk9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.980429 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.993834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.993980 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.493960266 +0000 UTC m=+146.369977413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.994087 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.994132 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfk7f\" (UniqueName: \"kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.994171 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:54 crc kubenswrapper[4796]: I1212 04:35:54.996185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:54 crc kubenswrapper[4796]: E1212 04:35:54.996495 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.496483795 +0000 UTC m=+146.372500942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.009881 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:55 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:55 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:55 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.009929 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.024004 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-djxgm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.097129 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.097297 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.597261374 +0000 UTC m=+146.473278521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.097421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.097455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.097484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfk7f\" (UniqueName: \"kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.097511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.097754 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.597739148 +0000 UTC m=+146.473756295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.098132 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.098181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.118496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfk7f\" (UniqueName: \"kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f\") pod \"certified-operators-9sbmb\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.123579 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pglbm"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.124442 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.131569 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.199778 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.199928 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn5hj\" (UniqueName: \"kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.199993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.200026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.200147 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.700133638 +0000 UTC m=+146.576150785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.200980 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.220551 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pglbm"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.255531 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.296447 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.297352 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.301349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.301390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.301420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.301486 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn5hj\" (UniqueName: \"kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.301816 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.301959 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.801946749 +0000 UTC m=+146.677963896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.302089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.313586 4796 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.322193 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.342729 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn5hj\" (UniqueName: \"kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj\") pod \"community-operators-pglbm\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.404810 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.404991 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmg6\" (UniqueName: \"kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405026 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405080 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.405173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.405527 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:55.905498936 +0000 UTC m=+146.781516143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.410291 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.410904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.417812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.418951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.432624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.434188 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.434218 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.439123 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.439575 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.447353 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.488678 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.489909 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.507625 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.509204 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.514140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmg6\" (UniqueName: \"kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.514236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.514268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.514369 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.516835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.517300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.517731 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:56.017711371 +0000 UTC m=+146.893728578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.524609 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.534606 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.551222 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.551338 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.554541 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.557216 4796 patch_prober.go:28] interesting pod/console-f9d7485db-4tvxf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.557723 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4tvxf" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.566942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmg6\" (UniqueName: \"kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6\") pod \"certified-operators-hwntv\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.615934 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.616180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 
04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.616954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqqr\" (UniqueName: \"kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.617016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.617122 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-12 04:35:56.117106078 +0000 UTC m=+146.993123225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.618599 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.662543 4796 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T04:35:55.313603423Z","Handler":null,"Name":""} Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.722990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.723206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.723243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.723319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqqr\" (UniqueName: \"kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: E1212 04:35:55.723583 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-12 04:35:56.223571734 +0000 UTC m=+147.099588881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mwr2j" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.723582 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.724391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.732714 4796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.732741 4796 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.757809 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqqr\" (UniqueName: \"kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr\") pod \"community-operators-s4zc4\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.824082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.884095 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.932666 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.993821 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-l56xp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.994122 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l56xp" podUID="8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.993821 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-l56xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 12 04:35:55 crc kubenswrapper[4796]: I1212 04:35:55.994310 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l56xp" podUID="8fdf4752-ecd8-4f76-8a6d-6ca3ca3cbf6e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.019576 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:56 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:56 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:56 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.019618 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.027308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.039926 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.039965 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.043066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" event={"ID":"fe55b40c-eb22-441f-ac88-98cf1199f515","Type":"ContainerStarted","Data":"e62e7cea7b54d156f885baaf3f4f09082dd9d0c5a256ad74fcace8f4ef3aa968"} Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.043110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" event={"ID":"fe55b40c-eb22-441f-ac88-98cf1199f515","Type":"ContainerStarted","Data":"ebbb8581a8ca3713fe284d28810293c1cc4dd6c838104a7b2308bee885b5e96e"} Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.059156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e2b3c87-647e-4b63-8416-79ac60885c61","Type":"ContainerStarted","Data":"505b332de457a66247be28dd27cf6ab5625745f78c4d198692274456a8da1373"} Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.059183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e2b3c87-647e-4b63-8416-79ac60885c61","Type":"ContainerStarted","Data":"15c9d170f6edccebf5f409c670a87e956a3c048ca5b86660724489be4e859e50"} Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.072512 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9d4m" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.077256 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zqsxz" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.126503 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jfpqc" podStartSLOduration=13.126482625 podStartE2EDuration="13.126482625s" podCreationTimestamp="2025-12-12 04:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:56.111666464 +0000 UTC m=+146.987683631" watchObservedRunningTime="2025-12-12 04:35:56.126482625 +0000 UTC m=+147.002499772" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.137310 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.137292372 podStartE2EDuration="2.137292372s" podCreationTimestamp="2025-12-12 04:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:56.136822257 +0000 UTC m=+147.012839404" watchObservedRunningTime="2025-12-12 04:35:56.137292372 +0000 UTC m=+147.013309519" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.142921 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.180306 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9sbmb"] Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.181845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mwr2j\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:56 crc kubenswrapper[4796]: W1212 04:35:56.241338 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3663bead08a050bb78ed63822b9a3ad4b6165d3e3be7d30f6b68f827a400a29c WatchSource:0}: Error finding container 3663bead08a050bb78ed63822b9a3ad4b6165d3e3be7d30f6b68f827a400a29c: Status 404 returned error can't find the container with id 3663bead08a050bb78ed63822b9a3ad4b6165d3e3be7d30f6b68f827a400a29c Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.296754 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.533566 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pglbm"] Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.715935 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.954568 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zxx8t"] Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.962762 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.971809 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 04:35:56 crc kubenswrapper[4796]: I1212 04:35:56.983920 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxx8t"] Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.005097 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.013341 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:57 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:57 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:57 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.013383 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.047804 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.065910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.065971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpm4m\" (UniqueName: \"kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.066106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.085494 4796 generic.go:334] "Generic (PLEG): container finished" podID="691c960a-4615-4c81-adba-c840acf2a99e" containerID="a77dab67d222739362c775f10e135d6fa482ceb8ac37306fad9d02fc132ffd75" exitCode=0 Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.085574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerDied","Data":"a77dab67d222739362c775f10e135d6fa482ceb8ac37306fad9d02fc132ffd75"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.085602 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerStarted","Data":"21a47f7146d6d50c4b593b6143a1812f19922028241e890be2441de1f9f299d0"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.089487 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.100437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a56bb9a42dcbc604fdcdd701d0f2051fafd533f06cfa058adbb675669b0c9269"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.101304 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.119734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" event={"ID":"6adaae06-a6aa-4040-9a14-5490cd58b1d9","Type":"ContainerDied","Data":"6758a06670cf4724b8cec0ab13ec7c6c6dcb245fe1094b1a77c08889b51450d8"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.119768 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6758a06670cf4724b8cec0ab13ec7c6c6dcb245fe1094b1a77c08889b51450d8" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.129684 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.130985 4796 generic.go:334] "Generic (PLEG): container finished" podID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerID="c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf" exitCode=0 Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.131459 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerDied","Data":"c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.131486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerStarted","Data":"5f0e0151bf16991408fdde6105a296757ff6771a46744bcc8ace8b3e11f50326"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.132778 4796 generic.go:334] "Generic (PLEG): container finished" podID="7e2b3c87-647e-4b63-8416-79ac60885c61" containerID="505b332de457a66247be28dd27cf6ab5625745f78c4d198692274456a8da1373" exitCode=0 Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.132830 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e2b3c87-647e-4b63-8416-79ac60885c61","Type":"ContainerDied","Data":"505b332de457a66247be28dd27cf6ab5625745f78c4d198692274456a8da1373"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.133576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerStarted","Data":"3ba51b2d8ea08b6afd1c08f03f510dd49e6e5dc67b2d93a830b713b3108dcba1"} Dec 12 04:35:57 crc 
kubenswrapper[4796]: I1212 04:35:57.171597 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.171670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.171690 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpm4m\" (UniqueName: \"kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.172358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.172568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.180532 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dee449e6ef97c1d1e3fa6cb71d71c062c9110ea71dd2a173904cb31aabde7163"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.180573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3663bead08a050bb78ed63822b9a3ad4b6165d3e3be7d30f6b68f827a400a29c"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.188358 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.211744 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6e9215ece6a3e5c98ee1cb53717d8face52158c522574832a05c991967b9a238"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.211776 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5d69ed2e36f64e8e6959ba373f30fbfeae5eee5fcf2e9dc1e8c0ca6733df2406"} Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.212042 4796 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.231313 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpm4m\" (UniqueName: \"kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m\") pod \"redhat-marketplace-zxx8t\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.272172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5nx6\" (UniqueName: \"kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6\") pod \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.272514 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume\") pod \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.272537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume\") pod \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\" (UID: \"6adaae06-a6aa-4040-9a14-5490cd58b1d9\") " Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.273734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "6adaae06-a6aa-4040-9a14-5490cd58b1d9" (UID: "6adaae06-a6aa-4040-9a14-5490cd58b1d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.278409 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6" (OuterVolumeSpecName: "kube-api-access-j5nx6") pod "6adaae06-a6aa-4040-9a14-5490cd58b1d9" (UID: "6adaae06-a6aa-4040-9a14-5490cd58b1d9"). InnerVolumeSpecName "kube-api-access-j5nx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.293924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6adaae06-a6aa-4040-9a14-5490cd58b1d9" (UID: "6adaae06-a6aa-4040-9a14-5490cd58b1d9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.318735 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:35:57 crc kubenswrapper[4796]: E1212 04:35:57.318936 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adaae06-a6aa-4040-9a14-5490cd58b1d9" containerName="collect-profiles" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.318947 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adaae06-a6aa-4040-9a14-5490cd58b1d9" containerName="collect-profiles" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.319042 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adaae06-a6aa-4040-9a14-5490cd58b1d9" containerName="collect-profiles" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.319652 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.335935 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.351110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.374229 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5nx6\" (UniqueName: \"kubernetes.io/projected/6adaae06-a6aa-4040-9a14-5490cd58b1d9-kube-api-access-j5nx6\") on node \"crc\" DevicePath \"\"" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.374252 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6adaae06-a6aa-4040-9a14-5490cd58b1d9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.374261 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6adaae06-a6aa-4040-9a14-5490cd58b1d9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.430335 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.431069 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7flkv" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.477768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2wl\" (UniqueName: \"kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.477811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.477837 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.579231 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc2wl\" (UniqueName: \"kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.579586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.579611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.580920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.581125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.612640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc2wl\" (UniqueName: \"kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl\") pod \"redhat-marketplace-d5pmg\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.665915 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.747020 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxx8t"] Dec 12 04:35:57 crc kubenswrapper[4796]: I1212 04:35:57.990522 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.011468 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:58 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:58 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:58 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.011511 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:58 crc kubenswrapper[4796]: W1212 04:35:58.027842 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d516edf_2b45_43a4_b506_b3b0e8f10a26.slice/crio-4002764d0fd14a1ada833ca7a0c3b1a489bb99c3a4309d885108fc176f5284d9 WatchSource:0}: Error finding container 4002764d0fd14a1ada833ca7a0c3b1a489bb99c3a4309d885108fc176f5284d9: Status 404 returned error can't find the container with id 4002764d0fd14a1ada833ca7a0c3b1a489bb99c3a4309d885108fc176f5284d9 Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.223311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b5a233ecf904782c6ad7803a1f23f4478da6214c4ee603240f3ff3c785cd5a98"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.229641 4796 generic.go:334] "Generic (PLEG): container finished" podID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerID="bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3" exitCode=0 Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.229707 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerDied","Data":"bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.229730 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerStarted","Data":"7f9dbc3ce44e49b6d74422c1f79bedee1cb360d654d5535c9a5a1e7dff55cf38"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.231111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" event={"ID":"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3","Type":"ContainerStarted","Data":"e7c878f00842cd45a5cb23e7f3ffa8ae23e39e90ff63916de802b3a2d17cd10b"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.231135 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" 
event={"ID":"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3","Type":"ContainerStarted","Data":"00bc8bd33c61736a5c5af8bd0389caa33ee55620770cfd68227e7478af173a98"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.231237 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.235483 4796 generic.go:334] "Generic (PLEG): container finished" podID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerID="ec32d64ff628d2897ab972d1c33e4422c06e7ad62a2a8580e2816a41e52d1302" exitCode=0 Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.235544 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerDied","Data":"ec32d64ff628d2897ab972d1c33e4422c06e7ad62a2a8580e2816a41e52d1302"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.258337 4796 generic.go:334] "Generic (PLEG): container finished" podID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerID="2fb015eeb0e0291ef576c54b561a0db0080470ac529e67f106b39537f76e1f9b" exitCode=0 Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.258403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerDied","Data":"2fb015eeb0e0291ef576c54b561a0db0080470ac529e67f106b39537f76e1f9b"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.258431 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerStarted","Data":"6ed57ae92aa7b2f7561f3ca9e89889edb70e56dcbefc067dfe8d885961361299"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.264346 4796 generic.go:334] "Generic (PLEG): container finished" podID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerID="e8583145232595ecec3c34df655cac5d53358ea1d90f3d9e9600025129b4d058" exitCode=0 Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.265301 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerDied","Data":"e8583145232595ecec3c34df655cac5d53358ea1d90f3d9e9600025129b4d058"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.265324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerStarted","Data":"4002764d0fd14a1ada833ca7a0c3b1a489bb99c3a4309d885108fc176f5284d9"} Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.265470 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.304739 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" podStartSLOduration=127.304720031 podStartE2EDuration="2m7.304720031s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:35:58.27773138 +0000 UTC m=+149.153748517" watchObservedRunningTime="2025-12-12 04:35:58.304720031 +0000 UTC m=+149.180737178" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.322314 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.323298 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.324036 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.328833 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.507138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdb92\" (UniqueName: \"kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.507212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.507237 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.608516 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdb92\" (UniqueName: \"kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.608600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.608643 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.613983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.614079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.642107 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdb92\" (UniqueName: \"kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92\") pod \"redhat-operators-rsk6v\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.674258 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.712056 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.713935 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:35:58 crc kubenswrapper[4796]: E1212 04:35:58.714237 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2b3c87-647e-4b63-8416-79ac60885c61" containerName="pruner" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.714257 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2b3c87-647e-4b63-8416-79ac60885c61" containerName="pruner" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.714437 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2b3c87-647e-4b63-8416-79ac60885c61" containerName="pruner" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.717720 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.720976 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.811560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir\") pod \"7e2b3c87-647e-4b63-8416-79ac60885c61\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.811621 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access\") pod \"7e2b3c87-647e-4b63-8416-79ac60885c61\" (UID: \"7e2b3c87-647e-4b63-8416-79ac60885c61\") " Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.811866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.811909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.811931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvtl\" (UniqueName: \"kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.812047 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e2b3c87-647e-4b63-8416-79ac60885c61" (UID: "7e2b3c87-647e-4b63-8416-79ac60885c61"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.829134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e2b3c87-647e-4b63-8416-79ac60885c61" (UID: "7e2b3c87-647e-4b63-8416-79ac60885c61"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.912702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.912771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.912804 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvtl\" (UniqueName: \"kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.912878 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e2b3c87-647e-4b63-8416-79ac60885c61-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.912892 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2b3c87-647e-4b63-8416-79ac60885c61-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.913147 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.913386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:58 crc kubenswrapper[4796]: I1212 04:35:58.933639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvtl\" (UniqueName: \"kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl\") pod \"redhat-operators-8nbl2\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.007752 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:35:59 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 12 04:35:59 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:35:59 crc kubenswrapper[4796]: healthz check failed Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.008102 4796 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.087274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.102138 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:35:59 crc kubenswrapper[4796]: W1212 04:35:59.129710 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1b12ad_66a0_46a1_a64e_54bd3a294549.slice/crio-6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772 WatchSource:0}: Error finding container 6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772: Status 404 returned error can't find the container with id 6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772 Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.288050 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.288172 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7e2b3c87-647e-4b63-8416-79ac60885c61","Type":"ContainerDied","Data":"15c9d170f6edccebf5f409c670a87e956a3c048ca5b86660724489be4e859e50"} Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.288226 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c9d170f6edccebf5f409c670a87e956a3c048ca5b86660724489be4e859e50" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.292642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerStarted","Data":"6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772"} Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.582140 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.773608 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.774477 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.780068 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.780512 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.782472 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.837410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.837498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.939102 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.939210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.939329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:35:59 crc kubenswrapper[4796]: I1212 04:35:59.969011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.009929 4796 patch_prober.go:28] interesting pod/router-default-5444994796-px76m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 04:36:00 crc kubenswrapper[4796]: [+]has-synced ok Dec 12 04:36:00 crc kubenswrapper[4796]: [+]process-running ok Dec 12 04:36:00 crc kubenswrapper[4796]: healthz check failed Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.009994 4796 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-px76m" podUID="035ca985-1000-4c28-aece-3c46abf07371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.118039 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.316476 4796 generic.go:334] "Generic (PLEG): container finished" podID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerID="3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8" exitCode=0 Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.317524 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerDied","Data":"3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8"} Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.317547 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerStarted","Data":"77287c42b25b928268957b05c4587da1ba04301cf145964c8a8a1b5185cf1860"} Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.327798 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerID="349710e3c0fe44c86a19f2212c3e0e06b294904d7ee341f20fe9eb480c19f0c5" exitCode=0 Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.327831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerDied","Data":"349710e3c0fe44c86a19f2212c3e0e06b294904d7ee341f20fe9eb480c19f0c5"} Dec 12 04:36:00 crc kubenswrapper[4796]: I1212 04:36:00.664461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 12 04:36:00 crc kubenswrapper[4796]: W1212 04:36:00.721275 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf43ae682_c3aa_4806_91d0_fef5a6ca1c98.slice/crio-f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4 WatchSource:0}: Error finding container f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4: Status 404 returned error can't find the container with id f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4 Dec 12 04:36:01 crc kubenswrapper[4796]: I1212 04:36:01.013955 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:36:01 crc kubenswrapper[4796]: I1212 04:36:01.021173 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-px76m" Dec 12 04:36:01 crc kubenswrapper[4796]: I1212 04:36:01.364213 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f43ae682-c3aa-4806-91d0-fef5a6ca1c98","Type":"ContainerStarted","Data":"f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4"} Dec 12 04:36:02 crc kubenswrapper[4796]: I1212 04:36:02.208626 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2mk57" Dec 12 04:36:02 crc kubenswrapper[4796]: I1212 04:36:02.395738 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f43ae682-c3aa-4806-91d0-fef5a6ca1c98","Type":"ContainerStarted","Data":"5007420a3f1297987a1887ae2bb2bad8f33f17d4095a71831b37a1cfa45986e1"} Dec 12 04:36:02 crc kubenswrapper[4796]: I1212 04:36:02.969485 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:36:02 crc kubenswrapper[4796]: I1212 04:36:02.969804 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:36:03 crc kubenswrapper[4796]: I1212 04:36:03.455793 4796 generic.go:334] "Generic (PLEG): container finished" podID="f43ae682-c3aa-4806-91d0-fef5a6ca1c98" containerID="5007420a3f1297987a1887ae2bb2bad8f33f17d4095a71831b37a1cfa45986e1" exitCode=0 Dec 12 04:36:03 crc kubenswrapper[4796]: I1212 04:36:03.455959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f43ae682-c3aa-4806-91d0-fef5a6ca1c98","Type":"ContainerDied","Data":"5007420a3f1297987a1887ae2bb2bad8f33f17d4095a71831b37a1cfa45986e1"} Dec 12 04:36:05 crc kubenswrapper[4796]: I1212 04:36:05.553637 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:36:05 crc kubenswrapper[4796]: I1212 04:36:05.556890 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:36:05 crc kubenswrapper[4796]: I1212 04:36:05.997988 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l56xp" Dec 12 04:36:14 crc kubenswrapper[4796]: I1212 04:36:14.215302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:36:14 crc kubenswrapper[4796]: I1212 04:36:14.223740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a81191a1-393c-400c-9b7d-6748c4a8fb36-metrics-certs\") pod \"network-metrics-daemon-ftpgk\" (UID: \"a81191a1-393c-400c-9b7d-6748c4a8fb36\") " pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:36:14 crc kubenswrapper[4796]: I1212 04:36:14.444861 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ftpgk" Dec 12 04:36:16 crc kubenswrapper[4796]: I1212 04:36:16.302630 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:36:21 crc kubenswrapper[4796]: I1212 04:36:21.998193 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.138949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir\") pod \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.139001 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access\") pod \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\" (UID: \"f43ae682-c3aa-4806-91d0-fef5a6ca1c98\") " Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.138999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f43ae682-c3aa-4806-91d0-fef5a6ca1c98" (UID: "f43ae682-c3aa-4806-91d0-fef5a6ca1c98"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.139247 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.144350 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f43ae682-c3aa-4806-91d0-fef5a6ca1c98" (UID: "f43ae682-c3aa-4806-91d0-fef5a6ca1c98"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.240214 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f43ae682-c3aa-4806-91d0-fef5a6ca1c98-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.625679 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f43ae682-c3aa-4806-91d0-fef5a6ca1c98","Type":"ContainerDied","Data":"f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4"} Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.625715 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8549a4feecc0caac21ed014255c9bad418e6e2f2dd1b439b819be03b69db8a4" Dec 12 04:36:22 crc kubenswrapper[4796]: I1212 04:36:22.625792 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 12 04:36:27 crc kubenswrapper[4796]: I1212 04:36:27.088913 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xwwn6" Dec 12 04:36:32 crc kubenswrapper[4796]: E1212 04:36:32.043966 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 12 04:36:32 crc kubenswrapper[4796]: E1212 04:36:32.044767 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn5hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pglbm_openshift-marketplace(a365526d-630d-4ebf-8d4f-98c944e6eee3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:36:32 crc kubenswrapper[4796]: E1212 04:36:32.046213 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pglbm" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" Dec 12 04:36:32 crc kubenswrapper[4796]: I1212 04:36:32.969969 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:36:32 crc kubenswrapper[4796]: I1212 04:36:32.970049 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.165023 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 04:36:34 crc kubenswrapper[4796]: E1212 04:36:34.165429 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ae682-c3aa-4806-91d0-fef5a6ca1c98" containerName="pruner" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.165446 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ae682-c3aa-4806-91d0-fef5a6ca1c98" containerName="pruner" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.165692 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ae682-c3aa-4806-91d0-fef5a6ca1c98" containerName="pruner" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.167387 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.170240 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.170438 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.195873 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.288821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.289102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.389908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.390187 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.390317 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.407259 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: I1212 04:36:34.499353 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:34 crc kubenswrapper[4796]: E1212 04:36:34.983243 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pglbm" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.139087 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.219706 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.219835 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cqqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s4zc4_openshift-marketplace(ad7c0a82-c54e-4675-941d-8fecd137719b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.221087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s4zc4" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.276007 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.276419 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfk7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9sbmb_openshift-marketplace(691c960a-4615-4c81-adba-c840acf2a99e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.288608 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9sbmb" podUID="691c960a-4615-4c81-adba-c840acf2a99e" Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.481998 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.513953 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.514164 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvmg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hwntv_openshift-marketplace(abee6136-ab46-48f3-987d-e9d070b4ee80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.519510 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hwntv" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.704991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerStarted","Data":"6480c93ba0ac39ddd114eb77a24d50347f49853d5e95fcc2068fbc2fadd2110e"} Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.712271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerStarted","Data":"80d97c13b7a78fced96305e37a69f0f1edd1e150a68739415f006df2b3cf7850"} Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.716269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerStarted","Data":"16adfbde581d06d9c70a1d365ec2ba72100abcb5341c86d83d863960ff0cd1ac"} Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.716885 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s4zc4" 
podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.717781 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hwntv" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" Dec 12 04:36:35 crc kubenswrapper[4796]: E1212 04:36:35.717894 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9sbmb" podUID="691c960a-4615-4c81-adba-c840acf2a99e" Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.844386 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 12 04:36:35 crc kubenswrapper[4796]: I1212 04:36:35.992739 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ftpgk"] Dec 12 04:36:36 crc kubenswrapper[4796]: W1212 04:36:36.000078 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda81191a1_393c_400c_9b7d_6748c4a8fb36.slice/crio-166c8d376a69a8bcf079383810e393cfd53fedd863f9aeb5a40081723071acad WatchSource:0}: Error finding container 166c8d376a69a8bcf079383810e393cfd53fedd863f9aeb5a40081723071acad: Status 404 returned error can't find the container with id 166c8d376a69a8bcf079383810e393cfd53fedd863f9aeb5a40081723071acad Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.722824 4796 generic.go:334] "Generic (PLEG): container finished" podID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerID="16adfbde581d06d9c70a1d365ec2ba72100abcb5341c86d83d863960ff0cd1ac" exitCode=0 Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.723030 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerDied","Data":"16adfbde581d06d9c70a1d365ec2ba72100abcb5341c86d83d863960ff0cd1ac"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.727635 4796 generic.go:334] "Generic (PLEG): container finished" podID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerID="80d97c13b7a78fced96305e37a69f0f1edd1e150a68739415f006df2b3cf7850" exitCode=0 Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.727703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerDied","Data":"80d97c13b7a78fced96305e37a69f0f1edd1e150a68739415f006df2b3cf7850"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.729681 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerID="6480c93ba0ac39ddd114eb77a24d50347f49853d5e95fcc2068fbc2fadd2110e" exitCode=0 Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.729734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerDied","Data":"6480c93ba0ac39ddd114eb77a24d50347f49853d5e95fcc2068fbc2fadd2110e"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.731792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a6fe68-5faf-4340-b49b-98283df8aa38","Type":"ContainerStarted","Data":"0e4700906539e6307cb18eda7e0f214e1c4673e8d1b1f6731ffb846f08454fa2"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.731840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a6fe68-5faf-4340-b49b-98283df8aa38","Type":"ContainerStarted","Data":"ce5d01171ab9c4bc7a5913655fde0578cb2bfdda429c729e7a9881771741f62f"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.733088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" event={"ID":"a81191a1-393c-400c-9b7d-6748c4a8fb36","Type":"ContainerStarted","Data":"34841094d69f6be9d85b83b5294f2bad8b0dfe9fcd2fac9d5b1e6ba3a6ec9b04"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.733117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" event={"ID":"a81191a1-393c-400c-9b7d-6748c4a8fb36","Type":"ContainerStarted","Data":"166c8d376a69a8bcf079383810e393cfd53fedd863f9aeb5a40081723071acad"} Dec 12 04:36:36 crc kubenswrapper[4796]: I1212 04:36:36.734261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerStarted","Data":"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b"} Dec 12 04:36:37 crc kubenswrapper[4796]: I1212 04:36:37.741918 4796 generic.go:334] "Generic (PLEG): container finished" podID="b1a6fe68-5faf-4340-b49b-98283df8aa38" containerID="0e4700906539e6307cb18eda7e0f214e1c4673e8d1b1f6731ffb846f08454fa2" exitCode=0 Dec 12 04:36:37 crc kubenswrapper[4796]: I1212 04:36:37.741987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a6fe68-5faf-4340-b49b-98283df8aa38","Type":"ContainerDied","Data":"0e4700906539e6307cb18eda7e0f214e1c4673e8d1b1f6731ffb846f08454fa2"} Dec 12 04:36:37 crc kubenswrapper[4796]: I1212 04:36:37.744009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ftpgk" event={"ID":"a81191a1-393c-400c-9b7d-6748c4a8fb36","Type":"ContainerStarted","Data":"af8923cbd3c65d41099f02f3df6d9ac6bd59c09bf1e9fcee87198e8d889f4e7d"} Dec 12 04:36:37 crc kubenswrapper[4796]: I1212 04:36:37.745220 4796 generic.go:334] "Generic (PLEG): container finished" podID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerID="81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b" exitCode=0 Dec 12 04:36:37 crc kubenswrapper[4796]: I1212 04:36:37.745253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerDied","Data":"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b"} Dec 12 04:36:38 crc kubenswrapper[4796]: I1212 04:36:38.778436 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ftpgk" podStartSLOduration=167.778411206 podStartE2EDuration="2m47.778411206s" podCreationTimestamp="2025-12-12 04:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:36:38.774497633 +0000 UTC m=+189.650514780" watchObservedRunningTime="2025-12-12 04:36:38.778411206 +0000 UTC m=+189.654428353" Dec 
12 04:36:38 crc kubenswrapper[4796]: I1212 04:36:38.968500 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 04:36:38 crc kubenswrapper[4796]: I1212 04:36:38.969238 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:38 crc kubenswrapper[4796]: I1212 04:36:38.979788 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.060113 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.060217 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.060261 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.161410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.161469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.161513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.161557 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.161651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc 
kubenswrapper[4796]: I1212 04:36:39.178032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access\") pod \"installer-9-crc\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.301505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.346912 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.501645 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir\") pod \"b1a6fe68-5faf-4340-b49b-98283df8aa38\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.501766 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access\") pod \"b1a6fe68-5faf-4340-b49b-98283df8aa38\" (UID: \"b1a6fe68-5faf-4340-b49b-98283df8aa38\") " Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.502199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1a6fe68-5faf-4340-b49b-98283df8aa38" (UID: "b1a6fe68-5faf-4340-b49b-98283df8aa38"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.505330 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1a6fe68-5faf-4340-b49b-98283df8aa38" (UID: "b1a6fe68-5faf-4340-b49b-98283df8aa38"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.602688 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1a6fe68-5faf-4340-b49b-98283df8aa38-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.602722 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1a6fe68-5faf-4340-b49b-98283df8aa38-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.756777 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1a6fe68-5faf-4340-b49b-98283df8aa38","Type":"ContainerDied","Data":"ce5d01171ab9c4bc7a5913655fde0578cb2bfdda429c729e7a9881771741f62f"} Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.756810 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5d01171ab9c4bc7a5913655fde0578cb2bfdda429c729e7a9881771741f62f" Dec 12 04:36:39 crc kubenswrapper[4796]: I1212 04:36:39.756922 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 12 04:36:40 crc kubenswrapper[4796]: I1212 04:36:40.566580 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 12 04:36:40 crc kubenswrapper[4796]: W1212 04:36:40.577627 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c831a3f_3013_431c_a451_b853f8162e02.slice/crio-7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4 WatchSource:0}: Error finding container 7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4: Status 404 returned error can't find the container with id 7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4 Dec 12 04:36:40 crc kubenswrapper[4796]: I1212 04:36:40.763137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1c831a3f-3013-431c-a451-b853f8162e02","Type":"ContainerStarted","Data":"7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4"} Dec 12 04:36:40 crc kubenswrapper[4796]: I1212 04:36:40.766454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerStarted","Data":"358fe691a45bc3038e806e8e2248176dadab341b8c1b63f822ad8e8eb2e81f53"} Dec 12 04:36:41 crc kubenswrapper[4796]: I1212 04:36:41.803453 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rsk6v" podStartSLOduration=3.970511254 podStartE2EDuration="43.803437829s" podCreationTimestamp="2025-12-12 04:35:58 +0000 UTC" firstStartedPulling="2025-12-12 04:36:00.331524578 +0000 UTC m=+151.207541725" lastFinishedPulling="2025-12-12 04:36:40.164451153 +0000 UTC m=+191.040468300" observedRunningTime="2025-12-12 04:36:41.799107435 +0000 UTC m=+192.675124582" watchObservedRunningTime="2025-12-12 04:36:41.803437829 +0000 UTC m=+192.679454976" Dec 12 04:36:42 crc kubenswrapper[4796]: I1212 04:36:42.788599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerStarted","Data":"1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3"} Dec 12 04:36:43 crc kubenswrapper[4796]: I1212 04:36:43.801618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1c831a3f-3013-431c-a451-b853f8162e02","Type":"ContainerStarted","Data":"04bbd2efd0592ac7c6155fc3c0b8315f3ebcc4890c5d9b2d63eb48981da4619d"} Dec 12 04:36:43 crc kubenswrapper[4796]: I1212 04:36:43.824151 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5pmg" podStartSLOduration=3.256698677 podStartE2EDuration="46.824133266s" podCreationTimestamp="2025-12-12 04:35:57 +0000 UTC" firstStartedPulling="2025-12-12 04:35:58.266236042 +0000 UTC m=+149.142253189" lastFinishedPulling="2025-12-12 04:36:41.833670631 +0000 UTC m=+192.709687778" observedRunningTime="2025-12-12 04:36:43.82037015 +0000 UTC m=+194.696387317" watchObservedRunningTime="2025-12-12 04:36:43.824133266 +0000 UTC m=+194.700150433" Dec 12 04:36:43 crc kubenswrapper[4796]: I1212 04:36:43.835782 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.8357611590000005 podStartE2EDuration="5.835761159s" 
podCreationTimestamp="2025-12-12 04:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:36:43.834373306 +0000 UTC m=+194.710390453" watchObservedRunningTime="2025-12-12 04:36:43.835761159 +0000 UTC m=+194.711778306" Dec 12 04:36:44 crc kubenswrapper[4796]: I1212 04:36:44.809646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerStarted","Data":"50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807"} Dec 12 04:36:44 crc kubenswrapper[4796]: I1212 04:36:44.832377 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zxx8t" podStartSLOduration=3.999313809 podStartE2EDuration="48.832361314s" podCreationTimestamp="2025-12-12 04:35:56 +0000 UTC" firstStartedPulling="2025-12-12 04:35:58.260159963 +0000 UTC m=+149.136177110" lastFinishedPulling="2025-12-12 04:36:43.093207468 +0000 UTC m=+193.969224615" observedRunningTime="2025-12-12 04:36:44.830809236 +0000 UTC m=+195.706826383" watchObservedRunningTime="2025-12-12 04:36:44.832361314 +0000 UTC m=+195.708378461" Dec 12 04:36:45 crc kubenswrapper[4796]: I1212 04:36:45.814194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerStarted","Data":"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda"} Dec 12 04:36:45 crc kubenswrapper[4796]: I1212 04:36:45.850911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8nbl2" podStartSLOduration=3.9634519150000003 podStartE2EDuration="47.850894154s" podCreationTimestamp="2025-12-12 04:35:58 +0000 UTC" firstStartedPulling="2025-12-12 04:36:00.325672466 +0000 UTC m=+151.201689613" lastFinishedPulling="2025-12-12 04:36:44.213114705 +0000 UTC m=+195.089131852" observedRunningTime="2025-12-12 04:36:45.848393905 +0000 UTC m=+196.724411052" watchObservedRunningTime="2025-12-12 04:36:45.850894154 +0000 UTC m=+196.726911301" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.352120 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.352403 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.429126 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.667001 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.667065 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.702098 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:36:47 crc kubenswrapper[4796]: I1212 04:36:47.881022 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d5pmg" 
Dec 12 04:36:48 crc kubenswrapper[4796]: I1212 04:36:48.675350 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:36:48 crc kubenswrapper[4796]: I1212 04:36:48.675590 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:36:48 crc kubenswrapper[4796]: I1212 04:36:48.718496 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:36:48 crc kubenswrapper[4796]: I1212 04:36:48.875623 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:36:49 crc kubenswrapper[4796]: I1212 04:36:49.088224 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:36:49 crc kubenswrapper[4796]: I1212 04:36:49.089058 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:36:49 crc kubenswrapper[4796]: I1212 04:36:49.366084 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:36:49 crc kubenswrapper[4796]: I1212 04:36:49.843149 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5pmg" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="registry-server" containerID="cri-o://1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3" gracePeriod=2 Dec 12 04:36:50 crc kubenswrapper[4796]: I1212 04:36:50.129293 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8nbl2" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="registry-server" probeResult="failure" output=< Dec 12 04:36:50 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 04:36:50 crc kubenswrapper[4796]: > Dec 12 04:36:51 crc kubenswrapper[4796]: E1212 04:36:51.502103 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d516edf_2b45_43a4_b506_b3b0e8f10a26.slice/crio-conmon-1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3.scope\": RecentStats: unable to find data in memory cache]" Dec 12 04:36:51 crc kubenswrapper[4796]: I1212 04:36:51.858910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerStarted","Data":"d1bdc895aadc44d6c80cab59c7d72921508a9d6e5126fde44b690d1a66043cf6"} Dec 12 04:36:51 crc kubenswrapper[4796]: I1212 04:36:51.868324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerStarted","Data":"5621fddc37285dc0608caf9599902041447d0ad4ee9db0485ffa1333434d37a2"} Dec 12 04:36:51 crc kubenswrapper[4796]: I1212 04:36:51.872515 4796 generic.go:334] "Generic (PLEG): container finished" podID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerID="1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3" exitCode=0 Dec 12 04:36:51 crc kubenswrapper[4796]: I1212 04:36:51.872560 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerDied","Data":"1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.098844 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.255348 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content\") pod \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.255425 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities\") pod \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.255496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc2wl\" (UniqueName: \"kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl\") pod \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\" (UID: \"4d516edf-2b45-43a4-b506-b3b0e8f10a26\") " Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.257350 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities" (OuterVolumeSpecName: "utilities") pod "4d516edf-2b45-43a4-b506-b3b0e8f10a26" (UID: "4d516edf-2b45-43a4-b506-b3b0e8f10a26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.265479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl" (OuterVolumeSpecName: "kube-api-access-jc2wl") pod "4d516edf-2b45-43a4-b506-b3b0e8f10a26" (UID: "4d516edf-2b45-43a4-b506-b3b0e8f10a26"). InnerVolumeSpecName "kube-api-access-jc2wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.295538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d516edf-2b45-43a4-b506-b3b0e8f10a26" (UID: "4d516edf-2b45-43a4-b506-b3b0e8f10a26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.357801 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.358078 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d516edf-2b45-43a4-b506-b3b0e8f10a26-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.358193 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc2wl\" (UniqueName: \"kubernetes.io/projected/4d516edf-2b45-43a4-b506-b3b0e8f10a26-kube-api-access-jc2wl\") on node \"crc\" DevicePath \"\"" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.884647 4796 generic.go:334] "Generic (PLEG): container finished" podID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerID="f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209" exitCode=0 Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.884743 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerDied","Data":"f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.888726 4796 generic.go:334] "Generic (PLEG): container finished" podID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerID="c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82" exitCode=0 Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.888788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerDied","Data":"c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.893727 4796 generic.go:334] "Generic (PLEG): container finished" podID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerID="d1bdc895aadc44d6c80cab59c7d72921508a9d6e5126fde44b690d1a66043cf6" exitCode=0 Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.893803 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerDied","Data":"d1bdc895aadc44d6c80cab59c7d72921508a9d6e5126fde44b690d1a66043cf6"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.900026 4796 generic.go:334] "Generic (PLEG): container finished" podID="691c960a-4615-4c81-adba-c840acf2a99e" containerID="5621fddc37285dc0608caf9599902041447d0ad4ee9db0485ffa1333434d37a2" exitCode=0 Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.900132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerDied","Data":"5621fddc37285dc0608caf9599902041447d0ad4ee9db0485ffa1333434d37a2"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.907859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5pmg" event={"ID":"4d516edf-2b45-43a4-b506-b3b0e8f10a26","Type":"ContainerDied","Data":"4002764d0fd14a1ada833ca7a0c3b1a489bb99c3a4309d885108fc176f5284d9"} Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 
04:36:52.907914 4796 scope.go:117] "RemoveContainer" containerID="1746a89c63bd770963c73dd48256e7ae8727c88a36611537898fdeadd9b467f3" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.908093 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5pmg" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.934546 4796 scope.go:117] "RemoveContainer" containerID="80d97c13b7a78fced96305e37a69f0f1edd1e150a68739415f006df2b3cf7850" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.960661 4796 scope.go:117] "RemoveContainer" containerID="e8583145232595ecec3c34df655cac5d53358ea1d90f3d9e9600025129b4d058" Dec 12 04:36:52 crc kubenswrapper[4796]: I1212 04:36:52.997842 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.000690 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5pmg"] Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.417293 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" path="/var/lib/kubelet/pods/4d516edf-2b45-43a4-b506-b3b0e8f10a26/volumes" Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.915177 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerStarted","Data":"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4"} Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.917372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerStarted","Data":"d721cc5923eab073a92ef76aafe91da26fd20167024365dfdba626eaa772e832"} Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.919462 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerStarted","Data":"bbc4aded6ce571ffb2e9a3ef1f0efa209c855ff4a5bb2bd671203ec6f7cfa639"} Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.922300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerStarted","Data":"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11"} Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.937421 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pglbm" podStartSLOduration=2.789946429 podStartE2EDuration="58.937404364s" podCreationTimestamp="2025-12-12 04:35:55 +0000 UTC" firstStartedPulling="2025-12-12 04:35:57.257687644 +0000 UTC m=+148.133704791" lastFinishedPulling="2025-12-12 04:36:53.405145579 +0000 UTC m=+204.281162726" observedRunningTime="2025-12-12 04:36:53.936073591 +0000 UTC m=+204.812090748" watchObservedRunningTime="2025-12-12 04:36:53.937404364 +0000 UTC m=+204.813421511" Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.985121 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4zc4" podStartSLOduration=3.8861447890000003 podStartE2EDuration="58.985104674s" podCreationTimestamp="2025-12-12 04:35:55 +0000 UTC" 
firstStartedPulling="2025-12-12 04:35:58.245798765 +0000 UTC m=+149.121815912" lastFinishedPulling="2025-12-12 04:36:53.34475865 +0000 UTC m=+204.220775797" observedRunningTime="2025-12-12 04:36:53.971373481 +0000 UTC m=+204.847390638" watchObservedRunningTime="2025-12-12 04:36:53.985104674 +0000 UTC m=+204.861121821" Dec 12 04:36:53 crc kubenswrapper[4796]: I1212 04:36:53.987315 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hwntv" podStartSLOduration=3.643321755 podStartE2EDuration="58.987306965s" podCreationTimestamp="2025-12-12 04:35:55 +0000 UTC" firstStartedPulling="2025-12-12 04:35:58.232383807 +0000 UTC m=+149.108400954" lastFinishedPulling="2025-12-12 04:36:53.576369017 +0000 UTC m=+204.452386164" observedRunningTime="2025-12-12 04:36:53.984642449 +0000 UTC m=+204.860659596" watchObservedRunningTime="2025-12-12 04:36:53.987306965 +0000 UTC m=+204.863324112" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.256564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.256642 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.440964 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.441077 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.619536 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.620004 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.673055 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.698858 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9sbmb" podStartSLOduration=5.433048999 podStartE2EDuration="1m1.698835744s" podCreationTimestamp="2025-12-12 04:35:54 +0000 UTC" firstStartedPulling="2025-12-12 04:35:57.089199145 +0000 UTC m=+147.965216292" lastFinishedPulling="2025-12-12 04:36:53.35498588 +0000 UTC m=+204.231003037" observedRunningTime="2025-12-12 04:36:54.005161852 +0000 UTC m=+204.881179009" watchObservedRunningTime="2025-12-12 04:36:55.698835744 +0000 UTC m=+206.574852901" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.884956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.885363 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:36:55 crc kubenswrapper[4796]: I1212 04:36:55.935674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:36:56 crc kubenswrapper[4796]: I1212 04:36:56.295754 4796 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9sbmb" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="registry-server" probeResult="failure" output=< Dec 12 04:36:56 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 04:36:56 crc kubenswrapper[4796]: > Dec 12 04:36:56 crc kubenswrapper[4796]: I1212 04:36:56.482938 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pglbm" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="registry-server" probeResult="failure" output=< Dec 12 04:36:56 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 04:36:56 crc kubenswrapper[4796]: > Dec 12 04:36:57 crc kubenswrapper[4796]: I1212 04:36:57.404663 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:36:59 crc kubenswrapper[4796]: I1212 04:36:59.141564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:36:59 crc kubenswrapper[4796]: I1212 04:36:59.198869 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:37:00 crc kubenswrapper[4796]: I1212 04:37:00.245491 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" containerID="cri-o://4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc" gracePeriod=15 Dec 12 04:37:01 crc kubenswrapper[4796]: I1212 04:37:01.769893 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:37:01 crc kubenswrapper[4796]: I1212 04:37:01.770256 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8nbl2" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="registry-server" containerID="cri-o://73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda" gracePeriod=2 Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.447005 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483336 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9"] Dec 12 04:37:02 crc kubenswrapper[4796]: E1212 04:37:02.483581 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a6fe68-5faf-4340-b49b-98283df8aa38" containerName="pruner" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483594 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a6fe68-5faf-4340-b49b-98283df8aa38" containerName="pruner" Dec 12 04:37:02 crc kubenswrapper[4796]: E1212 04:37:02.483606 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="extract-utilities" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483613 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="extract-utilities" Dec 12 04:37:02 crc kubenswrapper[4796]: E1212 04:37:02.483625 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="extract-content" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483633 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="extract-content" Dec 12 04:37:02 crc kubenswrapper[4796]: E1212 04:37:02.483647 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="registry-server" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483653 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="registry-server" Dec 12 04:37:02 crc kubenswrapper[4796]: E1212 04:37:02.483668 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483675 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483802 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a6fe68-5faf-4340-b49b-98283df8aa38" containerName="pruner" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483815 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerName="oauth-openshift" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.483829 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d516edf-2b45-43a4-b506-b3b0e8f10a26" containerName="registry-server" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.484275 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.501884 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9"] Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.591996 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592344 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592376 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592785 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592903 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592953 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.592970 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593005 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593115 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593150 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vvx\" (UniqueName: \"kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx\") pod \"ada430eb-6dc8-4516-87df-5dbdc97b5563\" (UID: \"ada430eb-6dc8-4516-87df-5dbdc97b5563\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593201 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593324 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhs7b\" (UniqueName: \"kubernetes.io/projected/01213026-56aa-4ae5-9ffc-d25d8aac679e-kube-api-access-dhs7b\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-login\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593477 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-policies\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-session\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-dir\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593498 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593561 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593598 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-error\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593720 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593793 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593792 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593807 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593819 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.593830 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.598825 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.599024 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.602811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.603153 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx" (OuterVolumeSpecName: "kube-api-access-p8vvx") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "kube-api-access-p8vvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.611243 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.612475 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.612532 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.617739 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.619393 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ada430eb-6dc8-4516-87df-5dbdc97b5563" (UID: "ada430eb-6dc8-4516-87df-5dbdc97b5563"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.653013 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.694974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-policies\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695450 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-session\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-dir\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-dir\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695676 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 
04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-error\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.695987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhs7b\" (UniqueName: \"kubernetes.io/projected/01213026-56aa-4ae5-9ffc-d25d8aac679e-kube-api-access-dhs7b\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-login\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696096 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696109 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vvx\" (UniqueName: 
\"kubernetes.io/projected/ada430eb-6dc8-4516-87df-5dbdc97b5563-kube-api-access-p8vvx\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696121 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696131 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696142 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696152 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696185 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696198 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-audit-policies\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696207 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.696260 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ada430eb-6dc8-4516-87df-5dbdc97b5563-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.698088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.698172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.698655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.698668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.698929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.699581 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.699639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.699973 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-system-session\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.700296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-error\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.707781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-login\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.708626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01213026-56aa-4ae5-9ffc-d25d8aac679e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.715558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhs7b\" (UniqueName: \"kubernetes.io/projected/01213026-56aa-4ae5-9ffc-d25d8aac679e-kube-api-access-dhs7b\") pod \"oauth-openshift-6bdc6d4cf9-fk9q9\" (UID: \"01213026-56aa-4ae5-9ffc-d25d8aac679e\") " pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.797530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities\") pod \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.797591 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content\") pod \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.797669 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzvtl\" (UniqueName: \"kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl\") pod \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\" (UID: \"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6\") " Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.799986 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities" (OuterVolumeSpecName: "utilities") pod "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" (UID: "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.801460 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl" (OuterVolumeSpecName: "kube-api-access-hzvtl") pod "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" (UID: "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6"). InnerVolumeSpecName "kube-api-access-hzvtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.807024 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.899359 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzvtl\" (UniqueName: \"kubernetes.io/projected/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-kube-api-access-hzvtl\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.899387 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.969544 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.969595 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.969637 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.970244 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.970361 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5" gracePeriod=600 Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.975069 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" (UID: "ddfb430c-f96f-4e15-b0c6-43b17ec00ca6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.998061 4796 generic.go:334] "Generic (PLEG): container finished" podID="ada430eb-6dc8-4516-87df-5dbdc97b5563" containerID="4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc" exitCode=0 Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.998189 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.998535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" event={"ID":"ada430eb-6dc8-4516-87df-5dbdc97b5563","Type":"ContainerDied","Data":"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc"} Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.998596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4tb6c" event={"ID":"ada430eb-6dc8-4516-87df-5dbdc97b5563","Type":"ContainerDied","Data":"056c896a8c3fc97f2d6b96aebd73d476e997e97d1555884adeeb0d43e3e14df7"} Dec 12 04:37:02 crc kubenswrapper[4796]: I1212 04:37:02.998617 4796 scope.go:117] "RemoveContainer" containerID="4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.000862 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.006092 4796 generic.go:334] "Generic (PLEG): container finished" podID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerID="73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda" exitCode=0 Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.006119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerDied","Data":"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda"} Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.006143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nbl2" event={"ID":"ddfb430c-f96f-4e15-b0c6-43b17ec00ca6","Type":"ContainerDied","Data":"77287c42b25b928268957b05c4587da1ba04301cf145964c8a8a1b5185cf1860"} Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.006195 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nbl2" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.021271 4796 scope.go:117] "RemoveContainer" containerID="4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc" Dec 12 04:37:03 crc kubenswrapper[4796]: E1212 04:37:03.021660 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc\": container with ID starting with 4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc not found: ID does not exist" containerID="4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.021704 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc"} err="failed to get container status \"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc\": rpc error: code = NotFound desc = could not find container \"4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc\": container with ID starting with 4fd31501f03cf874659bccef18b2c73abc319aa2f16d084abed79e5c42a45acc not found: ID does not exist" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.021730 4796 scope.go:117] "RemoveContainer" containerID="73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.042831 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.046639 4796 scope.go:117] "RemoveContainer" containerID="81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.049704 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4tb6c"] Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.055436 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.063832 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8nbl2"] Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.064193 4796 scope.go:117] "RemoveContainer" containerID="3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.081569 4796 scope.go:117] "RemoveContainer" containerID="73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda" Dec 12 04:37:03 crc kubenswrapper[4796]: E1212 04:37:03.081953 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda\": container with ID starting with 73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda not found: ID does not exist" containerID="73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.081978 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda"} err="failed to get container status \"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda\": rpc error: code = 
NotFound desc = could not find container \"73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda\": container with ID starting with 73523107c8161ce849467e998d8ae09c0a29d76bda55e6b034858d49fce1bdda not found: ID does not exist" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.081998 4796 scope.go:117] "RemoveContainer" containerID="81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b" Dec 12 04:37:03 crc kubenswrapper[4796]: E1212 04:37:03.082323 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b\": container with ID starting with 81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b not found: ID does not exist" containerID="81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.082344 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b"} err="failed to get container status \"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b\": rpc error: code = NotFound desc = could not find container \"81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b\": container with ID starting with 81e196b1408f507321022374254f99fb09a9bdc356d8b6e2de6760e8c16b750b not found: ID does not exist" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.082356 4796 scope.go:117] "RemoveContainer" containerID="3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8" Dec 12 04:37:03 crc kubenswrapper[4796]: E1212 04:37:03.082738 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8\": container with ID starting with 3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8 not found: ID does not exist" containerID="3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.082822 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8"} err="failed to get container status \"3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8\": rpc error: code = NotFound desc = could not find container \"3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8\": container with ID starting with 3163ea6abc971ab4030f33f62469e376f33fb9d3ba80ec333b28761ec8d2b8e8 not found: ID does not exist" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.231173 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9"] Dec 12 04:37:03 crc kubenswrapper[4796]: W1212 04:37:03.238466 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01213026_56aa_4ae5_9ffc_d25d8aac679e.slice/crio-b7732125f316fcd3af8dccdf09ab43f8498462b137eb3516b1107c0a5043954d WatchSource:0}: Error finding container b7732125f316fcd3af8dccdf09ab43f8498462b137eb3516b1107c0a5043954d: Status 404 returned error can't find the container with id b7732125f316fcd3af8dccdf09ab43f8498462b137eb3516b1107c0a5043954d Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.422209 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ada430eb-6dc8-4516-87df-5dbdc97b5563" path="/var/lib/kubelet/pods/ada430eb-6dc8-4516-87df-5dbdc97b5563/volumes" Dec 12 04:37:03 crc kubenswrapper[4796]: I1212 04:37:03.423723 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" path="/var/lib/kubelet/pods/ddfb430c-f96f-4e15-b0c6-43b17ec00ca6/volumes" Dec 12 04:37:04 crc kubenswrapper[4796]: I1212 04:37:04.021859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" event={"ID":"01213026-56aa-4ae5-9ffc-d25d8aac679e","Type":"ContainerStarted","Data":"b7732125f316fcd3af8dccdf09ab43f8498462b137eb3516b1107c0a5043954d"} Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.029128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" event={"ID":"01213026-56aa-4ae5-9ffc-d25d8aac679e","Type":"ContainerStarted","Data":"14fba7544ae53dc1a0d07b805bf0258e78ad48ec8134768eba172e1d3c884600"} Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.030022 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.033788 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5" exitCode=0 Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.033835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5"} Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.037682 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.079863 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bdc6d4cf9-fk9q9" podStartSLOduration=30.079836313 podStartE2EDuration="30.079836313s" podCreationTimestamp="2025-12-12 04:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:37:05.055676433 +0000 UTC m=+215.931693610" watchObservedRunningTime="2025-12-12 04:37:05.079836313 +0000 UTC m=+215.955853470" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.312628 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.403793 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.484817 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.524863 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.658098 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:37:05 crc kubenswrapper[4796]: I1212 04:37:05.943606 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:37:06 crc kubenswrapper[4796]: I1212 04:37:06.040911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df"} Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.162127 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.162523 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hwntv" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="registry-server" containerID="cri-o://28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11" gracePeriod=2 Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.539619 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.667469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content\") pod \"abee6136-ab46-48f3-987d-e9d070b4ee80\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.667655 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities\") pod \"abee6136-ab46-48f3-987d-e9d070b4ee80\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.667751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmg6\" (UniqueName: \"kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6\") pod \"abee6136-ab46-48f3-987d-e9d070b4ee80\" (UID: \"abee6136-ab46-48f3-987d-e9d070b4ee80\") " Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.669735 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities" (OuterVolumeSpecName: "utilities") pod "abee6136-ab46-48f3-987d-e9d070b4ee80" (UID: "abee6136-ab46-48f3-987d-e9d070b4ee80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.676160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6" (OuterVolumeSpecName: "kube-api-access-jvmg6") pod "abee6136-ab46-48f3-987d-e9d070b4ee80" (UID: "abee6136-ab46-48f3-987d-e9d070b4ee80"). InnerVolumeSpecName "kube-api-access-jvmg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.733737 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abee6136-ab46-48f3-987d-e9d070b4ee80" (UID: "abee6136-ab46-48f3-987d-e9d070b4ee80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.763913 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.764175 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4zc4" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="registry-server" containerID="cri-o://d721cc5923eab073a92ef76aafe91da26fd20167024365dfdba626eaa772e832" gracePeriod=2 Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.771315 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.771348 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvmg6\" (UniqueName: \"kubernetes.io/projected/abee6136-ab46-48f3-987d-e9d070b4ee80-kube-api-access-jvmg6\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:07 crc kubenswrapper[4796]: I1212 04:37:07.771361 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abee6136-ab46-48f3-987d-e9d070b4ee80-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.055480 4796 generic.go:334] "Generic (PLEG): container finished" podID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerID="28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11" exitCode=0 Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.055546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerDied","Data":"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11"} Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.055575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwntv" event={"ID":"abee6136-ab46-48f3-987d-e9d070b4ee80","Type":"ContainerDied","Data":"7f9dbc3ce44e49b6d74422c1f79bedee1cb360d654d5535c9a5a1e7dff55cf38"} Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.055594 4796 scope.go:117] "RemoveContainer" containerID="28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.055703 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwntv" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.070316 4796 generic.go:334] "Generic (PLEG): container finished" podID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerID="d721cc5923eab073a92ef76aafe91da26fd20167024365dfdba626eaa772e832" exitCode=0 Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.070359 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerDied","Data":"d721cc5923eab073a92ef76aafe91da26fd20167024365dfdba626eaa772e832"} Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.095435 4796 scope.go:117] "RemoveContainer" containerID="f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.099368 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.099428 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hwntv"] Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.123104 4796 scope.go:117] "RemoveContainer" containerID="bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.138418 4796 scope.go:117] "RemoveContainer" containerID="28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11" Dec 12 04:37:08 crc kubenswrapper[4796]: E1212 04:37:08.138909 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11\": container with ID starting with 28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11 not found: ID does not exist" containerID="28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.138951 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11"} err="failed to get container status \"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11\": rpc error: code = NotFound desc = could not find container \"28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11\": container with ID starting with 28d4a8a5212d308124cf000146b0533574ce092d481a370bdc39e12001d6ca11 not found: ID does not exist" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.138976 4796 scope.go:117] "RemoveContainer" containerID="f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209" Dec 12 04:37:08 crc kubenswrapper[4796]: E1212 04:37:08.139345 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209\": container with ID starting with f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209 not found: ID does not exist" containerID="f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.139397 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209"} err="failed to get container status 
\"f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209\": rpc error: code = NotFound desc = could not find container \"f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209\": container with ID starting with f4900161d325f8cc925bc3a8d83794322f41152ac91aad7fd0f44203f2233209 not found: ID does not exist" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.139417 4796 scope.go:117] "RemoveContainer" containerID="bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3" Dec 12 04:37:08 crc kubenswrapper[4796]: E1212 04:37:08.139938 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3\": container with ID starting with bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3 not found: ID does not exist" containerID="bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.139982 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3"} err="failed to get container status \"bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3\": rpc error: code = NotFound desc = could not find container \"bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3\": container with ID starting with bcde9d028f3bafe2123fdb673f8e4e6ab7dcae896f24c1d0f3fa0ba26f34c5a3 not found: ID does not exist" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.142673 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.277018 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content\") pod \"ad7c0a82-c54e-4675-941d-8fecd137719b\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.277088 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities\") pod \"ad7c0a82-c54e-4675-941d-8fecd137719b\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.277147 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cqqr\" (UniqueName: \"kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr\") pod \"ad7c0a82-c54e-4675-941d-8fecd137719b\" (UID: \"ad7c0a82-c54e-4675-941d-8fecd137719b\") " Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.278197 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities" (OuterVolumeSpecName: "utilities") pod "ad7c0a82-c54e-4675-941d-8fecd137719b" (UID: "ad7c0a82-c54e-4675-941d-8fecd137719b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.281224 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr" (OuterVolumeSpecName: "kube-api-access-8cqqr") pod "ad7c0a82-c54e-4675-941d-8fecd137719b" (UID: "ad7c0a82-c54e-4675-941d-8fecd137719b"). InnerVolumeSpecName "kube-api-access-8cqqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.334816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7c0a82-c54e-4675-941d-8fecd137719b" (UID: "ad7c0a82-c54e-4675-941d-8fecd137719b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.378814 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.378860 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7c0a82-c54e-4675-941d-8fecd137719b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:08 crc kubenswrapper[4796]: I1212 04:37:08.378874 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cqqr\" (UniqueName: \"kubernetes.io/projected/ad7c0a82-c54e-4675-941d-8fecd137719b-kube-api-access-8cqqr\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.078025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4zc4" event={"ID":"ad7c0a82-c54e-4675-941d-8fecd137719b","Type":"ContainerDied","Data":"3ba51b2d8ea08b6afd1c08f03f510dd49e6e5dc67b2d93a830b713b3108dcba1"} Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.078302 4796 scope.go:117] "RemoveContainer" containerID="d721cc5923eab073a92ef76aafe91da26fd20167024365dfdba626eaa772e832" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.078394 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4zc4" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.092464 4796 scope.go:117] "RemoveContainer" containerID="d1bdc895aadc44d6c80cab59c7d72921508a9d6e5126fde44b690d1a66043cf6" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.107914 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.111179 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4zc4"] Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.111422 4796 scope.go:117] "RemoveContainer" containerID="ec32d64ff628d2897ab972d1c33e4422c06e7ad62a2a8580e2816a41e52d1302" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.424164 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" path="/var/lib/kubelet/pods/abee6136-ab46-48f3-987d-e9d070b4ee80/volumes" Dec 12 04:37:09 crc kubenswrapper[4796]: I1212 04:37:09.425709 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" path="/var/lib/kubelet/pods/ad7c0a82-c54e-4675-941d-8fecd137719b/volumes" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.394621 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.395244 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8" gracePeriod=15 Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.395344 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5" gracePeriod=15 Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.395384 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6" gracePeriod=15 Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.395397 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd" gracePeriod=15 Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.395402 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f" gracePeriod=15 Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396348 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 
04:37:20.396574 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396597 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396605 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396611 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396621 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396628 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396638 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396644 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396652 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396658 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396667 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396673 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396682 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396687 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396696 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396702 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396709 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396714 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="extract-content" Dec 12 04:37:20 crc kubenswrapper[4796]: 
E1212 04:37:20.396724 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396730 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396738 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396744 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396751 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396756 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396764 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396769 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396776 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396782 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396792 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396799 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="extract-utilities" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.396809 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396815 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396902 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396912 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="abee6136-ab46-48f3-987d-e9d070b4ee80" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396922 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396930 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396939 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396946 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7c0a82-c54e-4675-941d-8fecd137719b" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396954 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396960 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.396968 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddfb430c-f96f-4e15-b0c6-43b17ec00ca6" containerName="registry-server" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.400145 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.400928 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.404538 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.451145 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533562 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533641 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533782 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533806 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533868 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.533888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.634981 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635135 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635239 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 
04:37:20.635400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635444 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635525 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635627 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635291 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635659 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635679 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635701 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635720 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.635740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: I1212 04:37:20.752487 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:20 crc kubenswrapper[4796]: W1212 04:37:20.771055 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e911f318342c03ca784caeaee8eb1cc1404cf955b1c53bb854b32b602fadc817 WatchSource:0}: Error finding container e911f318342c03ca784caeaee8eb1cc1404cf955b1c53bb854b32b602fadc817: Status 404 returned error can't find the container with id e911f318342c03ca784caeaee8eb1cc1404cf955b1c53bb854b32b602fadc817 Dec 12 04:37:20 crc kubenswrapper[4796]: E1212 04:37:20.776933 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18805dd90f2efbbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 04:37:20.773258175 +0000 UTC m=+231.649275312,LastTimestamp:2025-12-12 04:37:20.773258175 +0000 UTC m=+231.649275312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.156517 4796 generic.go:334] "Generic (PLEG): container finished" podID="1c831a3f-3013-431c-a451-b853f8162e02" containerID="04bbd2efd0592ac7c6155fc3c0b8315f3ebcc4890c5d9b2d63eb48981da4619d" exitCode=0 Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.156594 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1c831a3f-3013-431c-a451-b853f8162e02","Type":"ContainerDied","Data":"04bbd2efd0592ac7c6155fc3c0b8315f3ebcc4890c5d9b2d63eb48981da4619d"} Dec 12 04:37:21 crc kubenswrapper[4796]: 
I1212 04:37:21.157351 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.158915 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160061 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160654 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5" exitCode=0 Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160675 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6" exitCode=0 Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160683 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd" exitCode=0 Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160690 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f" exitCode=2 Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.160751 4796 scope.go:117] "RemoveContainer" containerID="bde00fd11282826671a6c043fa53356b6058a48206266e2f57fcb39b2e5bedf1" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.162463 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd"} Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.162498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e911f318342c03ca784caeaee8eb1cc1404cf955b1c53bb854b32b602fadc817"} Dec 12 04:37:21 crc kubenswrapper[4796]: E1212 04:37:21.163055 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:37:21 crc kubenswrapper[4796]: I1212 04:37:21.163067 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.169935 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.426407 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.427153 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.606398 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir\") pod \"1c831a3f-3013-431c-a451-b853f8162e02\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.606726 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access\") pod \"1c831a3f-3013-431c-a451-b853f8162e02\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.607173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock\") pod \"1c831a3f-3013-431c-a451-b853f8162e02\" (UID: \"1c831a3f-3013-431c-a451-b853f8162e02\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.606539 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c831a3f-3013-431c-a451-b853f8162e02" (UID: "1c831a3f-3013-431c-a451-b853f8162e02"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.607207 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock" (OuterVolumeSpecName: "var-lock") pod "1c831a3f-3013-431c-a451-b853f8162e02" (UID: "1c831a3f-3013-431c-a451-b853f8162e02"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.607690 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.607797 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c831a3f-3013-431c-a451-b853f8162e02-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.621422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c831a3f-3013-431c-a451-b853f8162e02" (UID: "1c831a3f-3013-431c-a451-b853f8162e02"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.709597 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c831a3f-3013-431c-a451-b853f8162e02-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.755322 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.755927 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.756500 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.756886 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.810770 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.810810 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.810855 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.811104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.811108 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.811208 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.811208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.912464 4796 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:22 crc kubenswrapper[4796]: I1212 04:37:22.912490 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.176904 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.176902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1c831a3f-3013-431c-a451-b853f8162e02","Type":"ContainerDied","Data":"7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4"} Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.177849 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc208151b0a4b3b95c11d842f11546d06a98451db405451f7e50d6a40b94bd4" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.179323 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.179964 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8" exitCode=0 Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.180013 4796 scope.go:117] "RemoveContainer" containerID="0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.180163 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.192336 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.192843 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.194207 4796 scope.go:117] "RemoveContainer" containerID="1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.206161 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.206481 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.212842 4796 scope.go:117] "RemoveContainer" containerID="0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.233225 4796 scope.go:117] "RemoveContainer" containerID="3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.249908 4796 scope.go:117] "RemoveContainer" containerID="32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.265406 4796 scope.go:117] "RemoveContainer" containerID="dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.288323 4796 scope.go:117] "RemoveContainer" containerID="0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.288753 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\": container with ID starting with 0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5 not found: ID does not exist" containerID="0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.288893 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5"} err="failed to get container status \"0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\": rpc error: code = NotFound desc = could not find container 
\"0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5\": container with ID starting with 0dfb46dc2514953717e6dae32581364c8bcc9bf8873361246186b895666c06e5 not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.288983 4796 scope.go:117] "RemoveContainer" containerID="1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.289756 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\": container with ID starting with 1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6 not found: ID does not exist" containerID="1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.289821 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6"} err="failed to get container status \"1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\": rpc error: code = NotFound desc = could not find container \"1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6\": container with ID starting with 1b7bd4fe1849299978a69a35e08f05231d0a0e295f3559a309ba547416a7fdf6 not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.289844 4796 scope.go:117] "RemoveContainer" containerID="0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.290309 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\": container with ID starting with 0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd not found: ID does not exist" containerID="0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.290388 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd"} err="failed to get container status \"0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\": rpc error: code = NotFound desc = could not find container \"0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd\": container with ID starting with 0101ea451e9798662549f5cb15f75c063851309b6709b666c3783474f0c3e9cd not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.290564 4796 scope.go:117] "RemoveContainer" containerID="3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.290876 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\": container with ID starting with 3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f not found: ID does not exist" containerID="3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.290919 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f"} 
err="failed to get container status \"3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\": rpc error: code = NotFound desc = could not find container \"3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f\": container with ID starting with 3e6406128695a86e978dfdbfddb72e3dfdc6b5ad51daef60badbc086ef44581f not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.290947 4796 scope.go:117] "RemoveContainer" containerID="32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.291482 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\": container with ID starting with 32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8 not found: ID does not exist" containerID="32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.291511 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8"} err="failed to get container status \"32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\": rpc error: code = NotFound desc = could not find container \"32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8\": container with ID starting with 32926142ee8f3fbe39e922f59c3e136d6b7a5045732be3ca1cd2afb921e9caf8 not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.291580 4796 scope.go:117] "RemoveContainer" containerID="dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db" Dec 12 04:37:23 crc kubenswrapper[4796]: E1212 04:37:23.291794 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\": container with ID starting with dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db not found: ID does not exist" containerID="dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.291881 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db"} err="failed to get container status \"dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\": rpc error: code = NotFound desc = could not find container \"dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db\": container with ID starting with dded6db4a90cf9858929050ead81001736a094d77f7e30caca0edf0ca383d9db not found: ID does not exist" Dec 12 04:37:23 crc kubenswrapper[4796]: I1212 04:37:23.416924 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.468990 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.469530 4796 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.469876 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.470332 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.470522 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:24 crc kubenswrapper[4796]: I1212 04:37:24.470541 4796 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.470843 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Dec 12 04:37:24 crc kubenswrapper[4796]: E1212 04:37:24.671724 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Dec 12 04:37:25 crc kubenswrapper[4796]: E1212 04:37:25.073043 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Dec 12 04:37:25 crc kubenswrapper[4796]: E1212 04:37:25.873860 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Dec 12 04:37:26 crc kubenswrapper[4796]: E1212 04:37:26.226765 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18805dd90f2efbbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 04:37:20.773258175 +0000 UTC 
m=+231.649275312,LastTimestamp:2025-12-12 04:37:20.773258175 +0000 UTC m=+231.649275312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 04:37:27 crc kubenswrapper[4796]: E1212 04:37:27.474953 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Dec 12 04:37:29 crc kubenswrapper[4796]: I1212 04:37:29.415573 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:30 crc kubenswrapper[4796]: E1212 04:37:30.676403 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Dec 12 04:37:31 crc kubenswrapper[4796]: I1212 04:37:31.410596 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:31 crc kubenswrapper[4796]: I1212 04:37:31.411717 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:31 crc kubenswrapper[4796]: I1212 04:37:31.443225 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:31 crc kubenswrapper[4796]: I1212 04:37:31.443394 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:31 crc kubenswrapper[4796]: E1212 04:37:31.443731 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:31 crc kubenswrapper[4796]: I1212 04:37:31.444362 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:31 crc kubenswrapper[4796]: W1212 04:37:31.466448 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-13741c0ec2d57c75861820ad0bb0b6d808f6e7480248df86ac57edf5331f7a4a WatchSource:0}: Error finding container 13741c0ec2d57c75861820ad0bb0b6d808f6e7480248df86ac57edf5331f7a4a: Status 404 returned error can't find the container with id 13741c0ec2d57c75861820ad0bb0b6d808f6e7480248df86ac57edf5331f7a4a Dec 12 04:37:32 crc kubenswrapper[4796]: E1212 04:37:32.026855 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.226619 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="97b51a2e8b8187c9257043c79e4bb80df8cc5810cdd7eb50cffa3ae72363d05a" exitCode=0 Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.226661 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"97b51a2e8b8187c9257043c79e4bb80df8cc5810cdd7eb50cffa3ae72363d05a"} Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.226692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"13741c0ec2d57c75861820ad0bb0b6d808f6e7480248df86ac57edf5331f7a4a"} Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.226945 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.226961 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:32 crc kubenswrapper[4796]: I1212 04:37:32.227158 4796 status_manager.go:851] "Failed to get status for pod" podUID="1c831a3f-3013-431c-a451-b853f8162e02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Dec 12 04:37:32 crc kubenswrapper[4796]: E1212 04:37:32.227166 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.235987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54814b6305e75217ba1975d655046e78bebf102717d6fc7fa52c8176dcb844c2"} Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.236338 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"223adf29bed90318ea209037f94e0efc3fca88a9495abbe1fa592b6f7c282b2d"} Dec 12 
04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.236352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77a36fa6de2b85b1559b8da89f4115fe3703ed9669b2073b2c899c0ab9616230"} Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.236362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3cdb8cdb0276216cbd8aea46da21ddb5e7c0d3b28b3f1a38b1ba78adcd1a3d90"} Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.540357 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:51132->192.168.126.11:10257: read: connection reset by peer" start-of-body= Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.540407 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:51132->192.168.126.11:10257: read: connection reset by peer" Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.954030 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 04:37:33 crc kubenswrapper[4796]: I1212 04:37:33.954122 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.245419 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.245468 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be" exitCode=1 Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.245514 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be"} Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.245936 4796 scope.go:117] "RemoveContainer" containerID="3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be" Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.257175 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79acf6d6bdb56dd13437c0961978c6db262189d85a515466b844ecea365557c4"} Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.257620 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.257652 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:34 crc kubenswrapper[4796]: I1212 04:37:34.257952 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:35 crc kubenswrapper[4796]: I1212 04:37:35.266373 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 04:37:35 crc kubenswrapper[4796]: I1212 04:37:35.266685 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"caecfdb99b37c488d5164e6856ec14e7eacfec35adbf8eca4427dff282cce8ae"} Dec 12 04:37:36 crc kubenswrapper[4796]: I1212 04:37:36.445397 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:36 crc kubenswrapper[4796]: I1212 04:37:36.445449 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:36 crc kubenswrapper[4796]: I1212 04:37:36.450789 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:39 crc kubenswrapper[4796]: I1212 04:37:39.267145 4796 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:39 crc kubenswrapper[4796]: I1212 04:37:39.286993 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:39 crc kubenswrapper[4796]: I1212 04:37:39.287022 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:39 crc kubenswrapper[4796]: I1212 04:37:39.290660 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:39 crc kubenswrapper[4796]: I1212 04:37:39.425622 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aa8cd259-963b-4aa4-879a-7fcbc9e98ffe" Dec 12 04:37:40 crc kubenswrapper[4796]: I1212 04:37:40.294631 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:40 crc kubenswrapper[4796]: I1212 04:37:40.294700 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00ebf115-3809-461f-96eb-11c9989cec7c" Dec 12 04:37:40 crc kubenswrapper[4796]: I1212 04:37:40.301399 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aa8cd259-963b-4aa4-879a-7fcbc9e98ffe" Dec 12 04:37:42 crc kubenswrapper[4796]: I1212 04:37:42.701170 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:37:42 crc kubenswrapper[4796]: I1212 04:37:42.701755 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 04:37:42 crc kubenswrapper[4796]: I1212 04:37:42.702073 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 04:37:43 crc kubenswrapper[4796]: I1212 04:37:43.509501 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:37:46 crc kubenswrapper[4796]: I1212 04:37:46.109272 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 12 04:37:46 crc kubenswrapper[4796]: I1212 04:37:46.287954 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 12 04:37:46 crc kubenswrapper[4796]: I1212 04:37:46.517167 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 12 04:37:46 crc kubenswrapper[4796]: I1212 04:37:46.739768 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 12 04:37:47 crc kubenswrapper[4796]: I1212 04:37:47.630571 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 12 04:37:47 crc kubenswrapper[4796]: I1212 04:37:47.631961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 12 04:37:48 crc kubenswrapper[4796]: I1212 04:37:48.866605 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 12 04:37:49 crc kubenswrapper[4796]: I1212 04:37:49.090353 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 04:37:49 crc kubenswrapper[4796]: I1212 04:37:49.190693 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 04:37:49 crc kubenswrapper[4796]: I1212 04:37:49.993495 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 12 04:37:50 crc kubenswrapper[4796]: I1212 04:37:50.028610 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 12 04:37:50 crc kubenswrapper[4796]: I1212 04:37:50.308233 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Dec 12 04:37:50 crc kubenswrapper[4796]: I1212 04:37:50.376931 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 12 04:37:50 crc kubenswrapper[4796]: I1212 04:37:50.532653 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 12 04:37:50 crc kubenswrapper[4796]: I1212 04:37:50.568978 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 12 04:37:51 crc kubenswrapper[4796]: I1212 04:37:51.032182 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 12 04:37:51 crc kubenswrapper[4796]: I1212 04:37:51.141938 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 12 04:37:51 crc kubenswrapper[4796]: I1212 04:37:51.308994 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 12 04:37:51 crc kubenswrapper[4796]: I1212 04:37:51.742611 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.035752 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.444118 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.702101 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.702182 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.789543 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.837271 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 12 04:37:52 crc kubenswrapper[4796]: I1212 04:37:52.907921 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.160792 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.351354 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.519938 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.581424 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.583016 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.745634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.812489 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.815346 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 12 04:37:53 crc kubenswrapper[4796]: I1212 04:37:53.822432 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.031383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.255907 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.277531 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.423106 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.587628 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.636623 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.641591 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.652339 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.902170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 12 04:37:54 crc kubenswrapper[4796]: I1212 04:37:54.929599 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.066099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.227166 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.254907 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.377909 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.414955 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.484201 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.543441 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.652164 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.670995 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.696873 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.764053 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.809245 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.828392 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.831731 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.840108 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.847849 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 12 04:37:55 crc kubenswrapper[4796]: I1212 04:37:55.959672 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.129368 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.148749 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.148753 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.198315 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 
04:37:56.246605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.322729 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.355628 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.445141 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.494310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.590778 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.718851 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.723749 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.742000 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.914842 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.946708 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.950631 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.950675 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.950691 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sbmb","openshift-marketplace/marketplace-operator-79b997595-84sk9","openshift-marketplace/redhat-marketplace-zxx8t","openshift-marketplace/community-operators-pglbm","openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.950952 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9sbmb" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="registry-server" containerID="cri-o://bbc4aded6ce571ffb2e9a3ef1f0efa209c855ff4a5bb2bd671203ec6f7cfa639" gracePeriod=30 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.952031 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rsk6v" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="registry-server" containerID="cri-o://358fe691a45bc3038e806e8e2248176dadab341b8c1b63f822ad8e8eb2e81f53" gracePeriod=30 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.952142 4796 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" containerID="cri-o://6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4" gracePeriod=30 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.952422 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pglbm" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="registry-server" containerID="cri-o://b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4" gracePeriod=30 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.952921 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zxx8t" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="registry-server" containerID="cri-o://50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" gracePeriod=30 Dec 12 04:37:56 crc kubenswrapper[4796]: I1212 04:37:56.955937 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.007667 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.007646412 podStartE2EDuration="18.007646412s" podCreationTimestamp="2025-12-12 04:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:37:57.003267512 +0000 UTC m=+267.879284659" watchObservedRunningTime="2025-12-12 04:37:57.007646412 +0000 UTC m=+267.883663569" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.013410 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.037058 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.041936 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-84sk9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.042001 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.085550 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.120936 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.244542 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.317297 4796 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.353050 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807 is running failed: container process not found" containerID="50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.353612 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807 is running failed: container process not found" containerID="50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.353841 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807 is running failed: container process not found" containerID="50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.353865 4796 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-zxx8t" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="registry-server" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.375694 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.414070 4796 generic.go:334] "Generic (PLEG): container finished" podID="921d55d1-a229-423a-a84f-c727ecd214a4" containerID="6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4" exitCode=0 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.425586 4796 generic.go:334] "Generic (PLEG): container finished" podID="691c960a-4615-4c81-adba-c840acf2a99e" containerID="bbc4aded6ce571ffb2e9a3ef1f0efa209c855ff4a5bb2bd671203ec6f7cfa639" exitCode=0 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.427432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn5hj\" (UniqueName: \"kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj\") pod \"a365526d-630d-4ebf-8d4f-98c944e6eee3\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.427485 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities\") pod \"a365526d-630d-4ebf-8d4f-98c944e6eee3\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.427511 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content\") pod \"a365526d-630d-4ebf-8d4f-98c944e6eee3\" (UID: \"a365526d-630d-4ebf-8d4f-98c944e6eee3\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.427731 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.431876 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities" (OuterVolumeSpecName: "utilities") pod "a365526d-630d-4ebf-8d4f-98c944e6eee3" (UID: "a365526d-630d-4ebf-8d4f-98c944e6eee3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.446930 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj" (OuterVolumeSpecName: "kube-api-access-fn5hj") pod "a365526d-630d-4ebf-8d4f-98c944e6eee3" (UID: "a365526d-630d-4ebf-8d4f-98c944e6eee3"). InnerVolumeSpecName "kube-api-access-fn5hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.447010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" event={"ID":"921d55d1-a229-423a-a84f-c727ecd214a4","Type":"ContainerDied","Data":"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.447063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" event={"ID":"921d55d1-a229-423a-a84f-c727ecd214a4","Type":"ContainerDied","Data":"3c881a38c7c03b7f595e21c8f250859f8e68cd99d07d327d4b166f2475dafe1c"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.447078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerDied","Data":"bbc4aded6ce571ffb2e9a3ef1f0efa209c855ff4a5bb2bd671203ec6f7cfa639"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.447113 4796 scope.go:117] "RemoveContainer" containerID="6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.451236 4796 generic.go:334] "Generic (PLEG): container finished" podID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerID="50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" exitCode=0 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.451421 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerDied","Data":"50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.456137 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerID="358fe691a45bc3038e806e8e2248176dadab341b8c1b63f822ad8e8eb2e81f53" exitCode=0 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.456212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" 
event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerDied","Data":"358fe691a45bc3038e806e8e2248176dadab341b8c1b63f822ad8e8eb2e81f53"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.459569 4796 generic.go:334] "Generic (PLEG): container finished" podID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerID="b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4" exitCode=0 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.459642 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pglbm" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.459694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerDied","Data":"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.459725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pglbm" event={"ID":"a365526d-630d-4ebf-8d4f-98c944e6eee3","Type":"ContainerDied","Data":"5f0e0151bf16991408fdde6105a296757ff6771a46744bcc8ace8b3e11f50326"} Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.472738 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.477072 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.480479 4796 scope.go:117] "RemoveContainer" containerID="6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4" Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.480980 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4\": container with ID starting with 6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4 not found: ID does not exist" containerID="6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.481015 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4"} err="failed to get container status \"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4\": rpc error: code = NotFound desc = could not find container \"6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4\": container with ID starting with 6769788fc71708a9dfb461063f4cb63a539b0906fa91a1b307673180ce3649a4 not found: ID does not exist" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.481036 4796 scope.go:117] "RemoveContainer" containerID="b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.490194 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.493997 4796 scope.go:117] "RemoveContainer" containerID="c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.497606 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.499300 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.501875 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a365526d-630d-4ebf-8d4f-98c944e6eee3" (UID: "a365526d-630d-4ebf-8d4f-98c944e6eee3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.515212 4796 scope.go:117] "RemoveContainer" containerID="c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.528879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content\") pod \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.528921 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpm4m\" (UniqueName: \"kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m\") pod \"c30f7710-de01-48bc-8245-ef5a7a048f2e\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.528944 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdb92\" (UniqueName: \"kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92\") pod \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.528965 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics\") pod \"921d55d1-a229-423a-a84f-c727ecd214a4\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.528998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfk7f\" (UniqueName: \"kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f\") pod \"691c960a-4615-4c81-adba-c840acf2a99e\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities\") pod \"c30f7710-de01-48bc-8245-ef5a7a048f2e\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities\") pod \"691c960a-4615-4c81-adba-c840acf2a99e\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529122 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-rtlw8\" (UniqueName: \"kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8\") pod \"921d55d1-a229-423a-a84f-c727ecd214a4\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529144 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content\") pod \"c30f7710-de01-48bc-8245-ef5a7a048f2e\" (UID: \"c30f7710-de01-48bc-8245-ef5a7a048f2e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529170 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content\") pod \"691c960a-4615-4c81-adba-c840acf2a99e\" (UID: \"691c960a-4615-4c81-adba-c840acf2a99e\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities\") pod \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\" (UID: \"3f1b12ad-66a0-46a1-a64e-54bd3a294549\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529215 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca\") pod \"921d55d1-a229-423a-a84f-c727ecd214a4\" (UID: \"921d55d1-a229-423a-a84f-c727ecd214a4\") " Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529396 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn5hj\" (UniqueName: \"kubernetes.io/projected/a365526d-630d-4ebf-8d4f-98c944e6eee3-kube-api-access-fn5hj\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529408 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529416 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a365526d-630d-4ebf-8d4f-98c944e6eee3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.529999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "921d55d1-a229-423a-a84f-c727ecd214a4" (UID: "921d55d1-a229-423a-a84f-c727ecd214a4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.531951 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities" (OuterVolumeSpecName: "utilities") pod "691c960a-4615-4c81-adba-c840acf2a99e" (UID: "691c960a-4615-4c81-adba-c840acf2a99e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.533889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities" (OuterVolumeSpecName: "utilities") pod "3f1b12ad-66a0-46a1-a64e-54bd3a294549" (UID: "3f1b12ad-66a0-46a1-a64e-54bd3a294549"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.535540 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities" (OuterVolumeSpecName: "utilities") pod "c30f7710-de01-48bc-8245-ef5a7a048f2e" (UID: "c30f7710-de01-48bc-8245-ef5a7a048f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.538519 4796 scope.go:117] "RemoveContainer" containerID="b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.538825 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8" (OuterVolumeSpecName: "kube-api-access-rtlw8") pod "921d55d1-a229-423a-a84f-c727ecd214a4" (UID: "921d55d1-a229-423a-a84f-c727ecd214a4"). InnerVolumeSpecName "kube-api-access-rtlw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.538927 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4\": container with ID starting with b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4 not found: ID does not exist" containerID="b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.538964 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4"} err="failed to get container status \"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4\": rpc error: code = NotFound desc = could not find container \"b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4\": container with ID starting with b927384fb5d431b7f85e89d27cb243dac111620ea05645ad41dd65f2ed7a78e4 not found: ID does not exist" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.538990 4796 scope.go:117] "RemoveContainer" containerID="c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.538938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m" (OuterVolumeSpecName: "kube-api-access-xpm4m") pod "c30f7710-de01-48bc-8245-ef5a7a048f2e" (UID: "c30f7710-de01-48bc-8245-ef5a7a048f2e"). InnerVolumeSpecName "kube-api-access-xpm4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.539463 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82\": container with ID starting with c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82 not found: ID does not exist" containerID="c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.539489 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82"} err="failed to get container status \"c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82\": rpc error: code = NotFound desc = could not find container \"c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82\": container with ID starting with c4ba376cdf08329c274e8984f0787041e39f7b14375ac5397225bcc51ff92a82 not found: ID does not exist" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.539509 4796 scope.go:117] "RemoveContainer" containerID="c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf" Dec 12 04:37:57 crc kubenswrapper[4796]: E1212 04:37:57.539907 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf\": container with ID starting with c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf not found: ID does not exist" containerID="c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.540016 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf"} err="failed to get container status \"c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf\": rpc error: code = NotFound desc = could not find container \"c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf\": container with ID starting with c6c31e793b4e1dde7efb091fe136db2cf8281b96ddbf6734a6e2553a1352a1cf not found: ID does not exist" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.545833 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.546148 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f" (OuterVolumeSpecName: "kube-api-access-lfk7f") pod "691c960a-4615-4c81-adba-c840acf2a99e" (UID: "691c960a-4615-4c81-adba-c840acf2a99e"). InnerVolumeSpecName "kube-api-access-lfk7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.548175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "921d55d1-a229-423a-a84f-c727ecd214a4" (UID: "921d55d1-a229-423a-a84f-c727ecd214a4"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.562490 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.566742 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92" (OuterVolumeSpecName: "kube-api-access-tdb92") pod "3f1b12ad-66a0-46a1-a64e-54bd3a294549" (UID: "3f1b12ad-66a0-46a1-a64e-54bd3a294549"). InnerVolumeSpecName "kube-api-access-tdb92". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.568630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c30f7710-de01-48bc-8245-ef5a7a048f2e" (UID: "c30f7710-de01-48bc-8245-ef5a7a048f2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.582970 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "691c960a-4615-4c81-adba-c840acf2a99e" (UID: "691c960a-4615-4c81-adba-c840acf2a99e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.584479 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630415 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630443 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlw8\" (UniqueName: \"kubernetes.io/projected/921d55d1-a229-423a-a84f-c727ecd214a4-kube-api-access-rtlw8\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630454 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630462 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c960a-4615-4c81-adba-c840acf2a99e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630473 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630483 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630494 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xpm4m\" (UniqueName: \"kubernetes.io/projected/c30f7710-de01-48bc-8245-ef5a7a048f2e-kube-api-access-xpm4m\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630506 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/921d55d1-a229-423a-a84f-c727ecd214a4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630518 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdb92\" (UniqueName: \"kubernetes.io/projected/3f1b12ad-66a0-46a1-a64e-54bd3a294549-kube-api-access-tdb92\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630528 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfk7f\" (UniqueName: \"kubernetes.io/projected/691c960a-4615-4c81-adba-c840acf2a99e-kube-api-access-lfk7f\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.630536 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c30f7710-de01-48bc-8245-ef5a7a048f2e-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.652353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f1b12ad-66a0-46a1-a64e-54bd3a294549" (UID: "3f1b12ad-66a0-46a1-a64e-54bd3a294549"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.708260 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.729834 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.731747 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1b12ad-66a0-46a1-a64e-54bd3a294549-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.769363 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.783701 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pglbm"] Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.787269 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pglbm"] Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.800681 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.812874 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.836437 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.862171 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.931077 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.975907 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 12 04:37:57 crc kubenswrapper[4796]: I1212 04:37:57.978027 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.052232 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.099174 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.102786 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.172165 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.182690 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.188062 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.265800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.283213 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.285686 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.344002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.366441 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.383163 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.410469 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.439518 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.442569 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 12 04:37:58 
crc kubenswrapper[4796]: I1212 04:37:58.471855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsk6v" event={"ID":"3f1b12ad-66a0-46a1-a64e-54bd3a294549","Type":"ContainerDied","Data":"6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772"} Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.471904 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsk6v" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.471910 4796 scope.go:117] "RemoveContainer" containerID="358fe691a45bc3038e806e8e2248176dadab341b8c1b63f822ad8e8eb2e81f53" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.484378 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-84sk9" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.493419 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zxx8t" event={"ID":"c30f7710-de01-48bc-8245-ef5a7a048f2e","Type":"ContainerDied","Data":"6ed57ae92aa7b2f7561f3ca9e89889edb70e56dcbefc067dfe8d885961361299"} Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.493579 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zxx8t" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.498827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9sbmb" event={"ID":"691c960a-4615-4c81-adba-c840acf2a99e","Type":"ContainerDied","Data":"21a47f7146d6d50c4b593b6143a1812f19922028241e890be2441de1f9f299d0"} Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.498961 4796 util.go:48] "No ready sandbox for pod can be found. 
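
The "SyncLoop (PLEG): event for pod ... ContainerDied" lines are pod lifecycle events produced by relisting the runtime, and the recurring "No ready sandbox for pod can be found. Need to start a new one" message is the sync loop noting that the pod has no ready sandbox left, so a fresh one would have to be created if the pod were still desired. A toy version of that decision, under that reading of the message (not kubelet code):

package main

import "fmt"

// sandbox is a reduced view of a pod sandbox as reported by the runtime.
type sandbox struct {
	id    string
	ready bool
}

// needsNewSandbox returns true when no existing sandbox is ready, which is
// the condition behind the "Need to start a new one" log line.
func needsNewSandbox(sandboxes []sandbox) bool {
	for _, s := range sandboxes {
		if s.ready {
			return false
		}
	}
	return true
}

func main() {
	// The pod's only sandbox has exited (ContainerDied for its infra container).
	got := []sandbox{{id: "6520c12ea99b3a204deffa3816b6ce2bd5a3b4a3ede46c6ae09ec52c23a16772", ready: false}}
	fmt.Println("need new sandbox:", needsNewSandbox(got)) // prints: need new sandbox: true
}
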
Need to start a new one" pod="openshift-marketplace/certified-operators-9sbmb" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.513114 4796 scope.go:117] "RemoveContainer" containerID="6480c93ba0ac39ddd114eb77a24d50347f49853d5e95fcc2068fbc2fadd2110e" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.543561 4796 scope.go:117] "RemoveContainer" containerID="349710e3c0fe44c86a19f2212c3e0e06b294904d7ee341f20fe9eb480c19f0c5" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.544846 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.551434 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rsk6v"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.568724 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-84sk9"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.569317 4796 scope.go:117] "RemoveContainer" containerID="50559dab8fb423a628d5b015cd4ad2b28703eb781a9fac173a3160cfba237807" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.571739 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-84sk9"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.575329 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9sbmb"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.579335 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9sbmb"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.579833 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.584351 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxx8t"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.587147 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zxx8t"] Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.588367 4796 scope.go:117] "RemoveContainer" containerID="16adfbde581d06d9c70a1d365ec2ba72100abcb5341c86d83d863960ff0cd1ac" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.599438 4796 scope.go:117] "RemoveContainer" containerID="2fb015eeb0e0291ef576c54b561a0db0080470ac529e67f106b39537f76e1f9b" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.609307 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.615642 4796 scope.go:117] "RemoveContainer" containerID="bbc4aded6ce571ffb2e9a3ef1f0efa209c855ff4a5bb2bd671203ec6f7cfa639" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.632361 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.632598 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.635216 4796 scope.go:117] "RemoveContainer" containerID="5621fddc37285dc0608caf9599902041447d0ad4ee9db0485ffa1333434d37a2" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 
04:37:58.646248 4796 scope.go:117] "RemoveContainer" containerID="a77dab67d222739362c775f10e135d6fa482ceb8ac37306fad9d02fc132ffd75" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.666145 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.731149 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.740147 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.774881 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.830598 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.889520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.936846 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 04:37:58 crc kubenswrapper[4796]: I1212 04:37:58.967704 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.051211 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.111614 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.125123 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.137042 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.145905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.168492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.199878 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.200297 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.307857 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.422970 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" path="/var/lib/kubelet/pods/3f1b12ad-66a0-46a1-a64e-54bd3a294549/volumes" Dec 
12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.424921 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691c960a-4615-4c81-adba-c840acf2a99e" path="/var/lib/kubelet/pods/691c960a-4615-4c81-adba-c840acf2a99e/volumes" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.426704 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" path="/var/lib/kubelet/pods/921d55d1-a229-423a-a84f-c727ecd214a4/volumes" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.429003 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" path="/var/lib/kubelet/pods/a365526d-630d-4ebf-8d4f-98c944e6eee3/volumes" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.430463 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" path="/var/lib/kubelet/pods/c30f7710-de01-48bc-8245-ef5a7a048f2e/volumes" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.629492 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.650013 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.668814 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.670971 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.738963 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.759723 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.787700 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.798859 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.858448 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.906567 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.938388 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 12 04:37:59 crc kubenswrapper[4796]: I1212 04:37:59.980184 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.008894 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.027011 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.153748 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.226760 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.226899 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.276709 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.288348 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.313061 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.471579 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.534964 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.567803 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.592869 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.596431 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.639524 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.663884 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.691262 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.831499 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.873842 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.926823 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 12 04:38:00 crc kubenswrapper[4796]: I1212 04:38:00.949758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.087828 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.163730 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.187185 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.261360 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.327356 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.339116 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.444808 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.495949 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.605317 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.642109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.655652 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.724835 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.725246 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd" gracePeriod=5 Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.780172 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.864813 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.896374 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.912383 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.917148 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 12 04:38:01 
crc kubenswrapper[4796]: I1212 04:38:01.917905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 12 04:38:01 crc kubenswrapper[4796]: I1212 04:38:01.918725 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.016609 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.126176 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.132606 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.284402 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.285482 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.483786 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.543003 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.579402 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.625599 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.701697 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.702826 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.702953 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.703531 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"caecfdb99b37c488d5164e6856ec14e7eacfec35adbf8eca4427dff282cce8ae"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.703727 4796 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://caecfdb99b37c488d5164e6856ec14e7eacfec35adbf8eca4427dff282cce8ae" gracePeriod=30 Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.848531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.849220 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.907512 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 12 04:38:02 crc kubenswrapper[4796]: I1212 04:38:02.944004 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.054117 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.072873 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.073530 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.173784 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.273374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.296055 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.303664 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.423965 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.437441 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.603851 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.686625 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.750633 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.773136 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 12 04:38:03 crc 
kubenswrapper[4796]: I1212 04:38:03.797804 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 12 04:38:03 crc kubenswrapper[4796]: I1212 04:38:03.991337 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.122431 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.165473 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.176680 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.275106 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.294562 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.381557 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.715795 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.717800 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.804219 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 12 04:38:04 crc kubenswrapper[4796]: I1212 04:38:04.830039 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.020575 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.074853 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.089134 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.166774 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.232353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.246383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.271965 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 12 04:38:05 crc kubenswrapper[4796]: 
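
The long run of "Caches populated for *v1.ConfigMap/*v1.Secret from object-..." lines shows the kubelet warming a watch cache for every Secret and ConfigMap referenced by its pods (plus standard informers for node-level objects such as Node, Service, RuntimeClass and CSIDriver), so volume contents and environment variables can be served from cache rather than ad hoc API reads. The same reflector and cache-sync machinery is available to ordinary clients through client-go shared informers; the snippet below is a generic client-go example of that pattern, not the kubelet's internal per-object reflectors, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// A shared informer factory runs one reflector per resource type and
	// fills an in-memory cache, analogous in spirit to the kubelet's
	// "Caches populated for *v1.ConfigMap ..." messages.
	factory := informers.NewSharedInformerFactory(client, 0)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated;", len(cmInformer.GetStore().List()), "objects")
}
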
I1212 04:38:05.357310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.466481 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.491646 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.535069 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.577839 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.649563 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.662180 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.786373 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 12 04:38:05 crc kubenswrapper[4796]: I1212 04:38:05.790501 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.092913 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.165017 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.273872 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.488359 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.502506 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.546049 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.663778 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.844403 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.869123 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 04:38:06 crc kubenswrapper[4796]: I1212 04:38:06.989617 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.165387 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 12 
04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.296757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.313889 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.313962 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.450899 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.450983 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451038 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451112 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451026 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
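
The teardown of the kube-apiserver-startup-monitor-crc static pod (UID f85e55b1a89d02b0cb034b1ea31ed45a) is spread across dozens of interleaved entries like the host-path unmounts above. When reading a journal like this it helps to project out only the lines for one pod UID and strip the klog header; a small filter along those lines is sketched below (a convenience script assumed for illustration, not part of any tooling referenced in the log).

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: podgrep <pod-uid> < journal.txt")
		os.Exit(2)
	}
	uid := os.Args[1]

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, uid) {
			continue
		}
		// Keep only the structured message after the klog "file.go:NNN] " header.
		if i := strings.Index(line, "] "); i >= 0 {
			line = line[i+2:]
		}
		fmt.Println(line)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}

Typical use would be something like: journalctl -u kubelet | go run podgrep.go f85e55b1a89d02b0cb034b1ea31ed45a
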
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451181 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451254 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451524 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451555 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451573 4796 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.451590 4796 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.458321 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.545529 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.545567 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd" exitCode=137 Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.545604 4796 scope.go:117] "RemoveContainer" containerID="871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.545692 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.553696 4796 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.563719 4796 scope.go:117] "RemoveContainer" containerID="871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd" Dec 12 04:38:07 crc kubenswrapper[4796]: E1212 04:38:07.564752 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd\": container with ID starting with 871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd not found: ID does not exist" containerID="871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd" Dec 12 04:38:07 crc kubenswrapper[4796]: I1212 04:38:07.564809 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd"} err="failed to get container status \"871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd\": rpc error: code = NotFound desc = could not find container \"871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd\": container with ID starting with 871a180a7565cc8d42e05ba8414821d871f7602fe1ca42b28504ce55cd835efd not found: ID does not exist" Dec 12 04:38:09 crc kubenswrapper[4796]: I1212 04:38:09.420082 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692163 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2krf"] Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692899 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692912 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692920 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692926 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692933 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692939 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692952 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692957 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" 
containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692968 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692974 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692983 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.692989 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.692998 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693003 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693009 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693014 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693024 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693029 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693036 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693041 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693049 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693054 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693063 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693069 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="extract-content" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693076 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c831a3f-3013-431c-a451-b853f8162e02" containerName="installer" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693081 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c831a3f-3013-431c-a451-b853f8162e02" containerName="installer" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693088 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693094 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: E1212 04:38:20.693102 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693107 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="extract-utilities" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693195 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1b12ad-66a0-46a1-a64e-54bd3a294549" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693205 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="691c960a-4615-4c81-adba-c840acf2a99e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693213 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30f7710-de01-48bc-8245-ef5a7a048f2e" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693221 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a365526d-630d-4ebf-8d4f-98c944e6eee3" containerName="registry-server" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693234 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="921d55d1-a229-423a-a84f-c727ecd214a4" containerName="marketplace-operator" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693241 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693248 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c831a3f-3013-431c-a451-b853f8162e02" containerName="installer" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.693870 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.695897 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.696848 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.696906 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.710984 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2krf"] Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.730180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-utilities\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.730481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-catalog-content\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.730583 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4xh\" (UniqueName: \"kubernetes.io/projected/d1335c04-c002-4da2-af48-7b5cd6910c27-kube-api-access-7g4xh\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.832186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-utilities\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.832368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-catalog-content\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.832469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g4xh\" (UniqueName: \"kubernetes.io/projected/d1335c04-c002-4da2-af48-7b5cd6910c27-kube-api-access-7g4xh\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.832890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-catalog-content\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " 
pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.833224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1335c04-c002-4da2-af48-7b5cd6910c27-utilities\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.866280 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g4xh\" (UniqueName: \"kubernetes.io/projected/d1335c04-c002-4da2-af48-7b5cd6910c27-kube-api-access-7g4xh\") pod \"redhat-operators-k2krf\" (UID: \"d1335c04-c002-4da2-af48-7b5cd6910c27\") " pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.897072 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lg6bx"] Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.899432 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.909626 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lg6bx"] Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.912796 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.933093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-utilities\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.933182 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfpw5\" (UniqueName: \"kubernetes.io/projected/dcf179a6-7e07-4022-a119-a9b97737e0db-kube-api-access-bfpw5\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:20 crc kubenswrapper[4796]: I1212 04:38:20.933207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-catalog-content\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.019649 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.033789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfpw5\" (UniqueName: \"kubernetes.io/projected/dcf179a6-7e07-4022-a119-a9b97737e0db-kube-api-access-bfpw5\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.033849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-catalog-content\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.033915 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-utilities\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.034832 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-catalog-content\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.034928 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcf179a6-7e07-4022-a119-a9b97737e0db-utilities\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.071189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfpw5\" (UniqueName: \"kubernetes.io/projected/dcf179a6-7e07-4022-a119-a9b97737e0db-kube-api-access-bfpw5\") pod \"certified-operators-lg6bx\" (UID: \"dcf179a6-7e07-4022-a119-a9b97737e0db\") " pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.233857 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.277075 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2krf"] Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.425901 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lg6bx"] Dec 12 04:38:21 crc kubenswrapper[4796]: W1212 04:38:21.467716 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcf179a6_7e07_4022_a119_a9b97737e0db.slice/crio-646662e6de1cf383dd05296dfa4de1daef45cbcd7c578c7da2dbc60db06f7533 WatchSource:0}: Error finding container 646662e6de1cf383dd05296dfa4de1daef45cbcd7c578c7da2dbc60db06f7533: Status 404 returned error can't find the container with id 646662e6de1cf383dd05296dfa4de1daef45cbcd7c578c7da2dbc60db06f7533 Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.634667 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1335c04-c002-4da2-af48-7b5cd6910c27" containerID="8ac28c0c588a15eb7c06bbe86d9a53677de568006704a50172d9c40b9f220cf0" exitCode=0 Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.634728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2krf" event={"ID":"d1335c04-c002-4da2-af48-7b5cd6910c27","Type":"ContainerDied","Data":"8ac28c0c588a15eb7c06bbe86d9a53677de568006704a50172d9c40b9f220cf0"} Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.634754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2krf" event={"ID":"d1335c04-c002-4da2-af48-7b5cd6910c27","Type":"ContainerStarted","Data":"17b5f1f8e1066505b1e5c6cbd1596a7b51f2b24dfe881909cdd7bb4f2af256a1"} Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.640973 4796 generic.go:334] "Generic (PLEG): container finished" podID="dcf179a6-7e07-4022-a119-a9b97737e0db" containerID="88e197f6a08c7e1304b789c7202460b58cfa84b5b24103e4cef713ec76cc6ea6" exitCode=0 Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.641010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg6bx" event={"ID":"dcf179a6-7e07-4022-a119-a9b97737e0db","Type":"ContainerDied","Data":"88e197f6a08c7e1304b789c7202460b58cfa84b5b24103e4cef713ec76cc6ea6"} Dec 12 04:38:21 crc kubenswrapper[4796]: I1212 04:38:21.641032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg6bx" event={"ID":"dcf179a6-7e07-4022-a119-a9b97737e0db","Type":"ContainerStarted","Data":"646662e6de1cf383dd05296dfa4de1daef45cbcd7c578c7da2dbc60db06f7533"} Dec 12 04:38:22 crc kubenswrapper[4796]: I1212 04:38:22.651507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2krf" event={"ID":"d1335c04-c002-4da2-af48-7b5cd6910c27","Type":"ContainerStarted","Data":"e6d918676e8baa82616ab92dd229736b2d347cb636b81d893aaa154444a694cc"} Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.091027 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bbk7"] Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.092247 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.095128 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.101474 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bbk7"] Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.256660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmd4v\" (UniqueName: \"kubernetes.io/projected/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-kube-api-access-qmd4v\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.256739 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-catalog-content\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.256759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-utilities\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.285716 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9w78"] Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.286613 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.288269 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.297107 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9w78"] Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.358195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-catalog-content\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.358240 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-utilities\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.358532 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmd4v\" (UniqueName: \"kubernetes.io/projected/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-kube-api-access-qmd4v\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.358667 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-utilities\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.358699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-catalog-content\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.376614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmd4v\" (UniqueName: \"kubernetes.io/projected/bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb-kube-api-access-qmd4v\") pod \"community-operators-5bbk7\" (UID: \"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb\") " pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.439516 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.459248 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-catalog-content\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.459330 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjnr\" (UniqueName: \"kubernetes.io/projected/21ae0f48-17be-4f69-a5c0-bf9c72205b24-kube-api-access-xxjnr\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.459394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-utilities\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.560274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjnr\" (UniqueName: \"kubernetes.io/projected/21ae0f48-17be-4f69-a5c0-bf9c72205b24-kube-api-access-xxjnr\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.560383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-utilities\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.560421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-catalog-content\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.561253 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-catalog-content\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.562166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ae0f48-17be-4f69-a5c0-bf9c72205b24-utilities\") pod \"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.587809 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjnr\" (UniqueName: \"kubernetes.io/projected/21ae0f48-17be-4f69-a5c0-bf9c72205b24-kube-api-access-xxjnr\") pod 
\"redhat-marketplace-z9w78\" (UID: \"21ae0f48-17be-4f69-a5c0-bf9c72205b24\") " pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.601094 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.630894 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bbk7"] Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.657983 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1335c04-c002-4da2-af48-7b5cd6910c27" containerID="e6d918676e8baa82616ab92dd229736b2d347cb636b81d893aaa154444a694cc" exitCode=0 Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.658032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2krf" event={"ID":"d1335c04-c002-4da2-af48-7b5cd6910c27","Type":"ContainerDied","Data":"e6d918676e8baa82616ab92dd229736b2d347cb636b81d893aaa154444a694cc"} Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.664074 4796 generic.go:334] "Generic (PLEG): container finished" podID="dcf179a6-7e07-4022-a119-a9b97737e0db" containerID="e833c4550b92fee9372345d063e12420a653bf66479c03427950429102b3a8cd" exitCode=0 Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.664123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg6bx" event={"ID":"dcf179a6-7e07-4022-a119-a9b97737e0db","Type":"ContainerDied","Data":"e833c4550b92fee9372345d063e12420a653bf66479c03427950429102b3a8cd"} Dec 12 04:38:23 crc kubenswrapper[4796]: I1212 04:38:23.670258 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bbk7" event={"ID":"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb","Type":"ContainerStarted","Data":"51f2a70bfc79d1404225313546717ebb9c8f989659eeb88940274a6e22c15cf4"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.042343 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9w78"] Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.678235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg6bx" event={"ID":"dcf179a6-7e07-4022-a119-a9b97737e0db","Type":"ContainerStarted","Data":"5ad6b0c76a9f4e3baaf1afacdb1d50a793d3dd83276c6eaf1621d6b5cb14c119"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.680440 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb" containerID="a584fcb66d2bddc231d79e24566fb9cf335d7894a8ececaaa4ba8097a9a53144" exitCode=0 Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.680499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bbk7" event={"ID":"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb","Type":"ContainerDied","Data":"a584fcb66d2bddc231d79e24566fb9cf335d7894a8ececaaa4ba8097a9a53144"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.682529 4796 generic.go:334] "Generic (PLEG): container finished" podID="21ae0f48-17be-4f69-a5c0-bf9c72205b24" containerID="5c219b9a4722b9827f10886433be7fc25b805a4643d585246160647f95ce567c" exitCode=0 Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.682568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9w78" 
event={"ID":"21ae0f48-17be-4f69-a5c0-bf9c72205b24","Type":"ContainerDied","Data":"5c219b9a4722b9827f10886433be7fc25b805a4643d585246160647f95ce567c"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.682585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9w78" event={"ID":"21ae0f48-17be-4f69-a5c0-bf9c72205b24","Type":"ContainerStarted","Data":"786d1cecbbb3c4eef2661f56eae939d1b7da80d338548c38ce8caa3a58568e61"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.686271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2krf" event={"ID":"d1335c04-c002-4da2-af48-7b5cd6910c27","Type":"ContainerStarted","Data":"95ace58b4284e0e55ddb4a6272d792e3b6a8180e20f1f9b781ee1e188c662ef0"} Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.697893 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lg6bx" podStartSLOduration=2.240283512 podStartE2EDuration="4.697874133s" podCreationTimestamp="2025-12-12 04:38:20 +0000 UTC" firstStartedPulling="2025-12-12 04:38:21.642316844 +0000 UTC m=+292.518333991" lastFinishedPulling="2025-12-12 04:38:24.099907465 +0000 UTC m=+294.975924612" observedRunningTime="2025-12-12 04:38:24.693227775 +0000 UTC m=+295.569244952" watchObservedRunningTime="2025-12-12 04:38:24.697874133 +0000 UTC m=+295.573891290" Dec 12 04:38:24 crc kubenswrapper[4796]: I1212 04:38:24.713951 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2krf" podStartSLOduration=2.287578491 podStartE2EDuration="4.713933842s" podCreationTimestamp="2025-12-12 04:38:20 +0000 UTC" firstStartedPulling="2025-12-12 04:38:21.636865931 +0000 UTC m=+292.512883078" lastFinishedPulling="2025-12-12 04:38:24.063221282 +0000 UTC m=+294.939238429" observedRunningTime="2025-12-12 04:38:24.711764663 +0000 UTC m=+295.587781830" watchObservedRunningTime="2025-12-12 04:38:24.713933842 +0000 UTC m=+295.589950999" Dec 12 04:38:25 crc kubenswrapper[4796]: I1212 04:38:25.692626 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb" containerID="213fc935f3ed846daed925672035eb4db893e17fa8a7006b4ca5673c882928a7" exitCode=0 Dec 12 04:38:25 crc kubenswrapper[4796]: I1212 04:38:25.692714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bbk7" event={"ID":"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb","Type":"ContainerDied","Data":"213fc935f3ed846daed925672035eb4db893e17fa8a7006b4ca5673c882928a7"} Dec 12 04:38:25 crc kubenswrapper[4796]: I1212 04:38:25.695368 4796 generic.go:334] "Generic (PLEG): container finished" podID="21ae0f48-17be-4f69-a5c0-bf9c72205b24" containerID="79c227290477cd3d6c9e6581a9269d644d09f27ab40d576a2c71114180d65041" exitCode=0 Dec 12 04:38:25 crc kubenswrapper[4796]: I1212 04:38:25.695417 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9w78" event={"ID":"21ae0f48-17be-4f69-a5c0-bf9c72205b24","Type":"ContainerDied","Data":"79c227290477cd3d6c9e6581a9269d644d09f27ab40d576a2c71114180d65041"} Dec 12 04:38:26 crc kubenswrapper[4796]: I1212 04:38:26.701946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bbk7" event={"ID":"bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb","Type":"ContainerStarted","Data":"f46d524669fd7b8b140279813239d03652d600fb286810ab1cbcfb5d781219fd"} Dec 12 04:38:26 crc 
kubenswrapper[4796]: I1212 04:38:26.704041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9w78" event={"ID":"21ae0f48-17be-4f69-a5c0-bf9c72205b24","Type":"ContainerStarted","Data":"4e8295442ccf1ce47d4bb0e7de54cf6ba581598f4cf4db953bc46f432cda395b"} Dec 12 04:38:26 crc kubenswrapper[4796]: I1212 04:38:26.722272 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bbk7" podStartSLOduration=1.987017904 podStartE2EDuration="3.722259594s" podCreationTimestamp="2025-12-12 04:38:23 +0000 UTC" firstStartedPulling="2025-12-12 04:38:24.683015951 +0000 UTC m=+295.559033098" lastFinishedPulling="2025-12-12 04:38:26.418257611 +0000 UTC m=+297.294274788" observedRunningTime="2025-12-12 04:38:26.719037902 +0000 UTC m=+297.595055049" watchObservedRunningTime="2025-12-12 04:38:26.722259594 +0000 UTC m=+297.598276741" Dec 12 04:38:26 crc kubenswrapper[4796]: I1212 04:38:26.748492 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9w78" podStartSLOduration=2.0035691 podStartE2EDuration="3.748476065s" podCreationTimestamp="2025-12-12 04:38:23 +0000 UTC" firstStartedPulling="2025-12-12 04:38:24.683935801 +0000 UTC m=+295.559952948" lastFinishedPulling="2025-12-12 04:38:26.428842766 +0000 UTC m=+297.304859913" observedRunningTime="2025-12-12 04:38:26.733763308 +0000 UTC m=+297.609780445" watchObservedRunningTime="2025-12-12 04:38:26.748476065 +0000 UTC m=+297.624493212" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.264045 4796 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.314220 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sng6c"] Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.315067 4796 util.go:30] "No sandbox for pod can be found. 
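
The pod_startup_latency_tracker records above are internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling; the m=+ suffixes are Go's monotonic clock readings, and the arithmetic below uses those). For certified-operators-lg6bx that is 4.697874133s - (294.975924612 - 292.518333991)s = 2.240283512s, exactly the SLO value logged. A few lines of Python, not part of the journal, reproducing that arithmetic with numbers copied from the record above:

# Values copied from the certified-operators-lg6bx startup-latency record above.
pod_start_e2e = 4.697874133            # podStartE2EDuration, seconds
first_started_pulling = 292.518333991  # m=+ offset of firstStartedPulling
last_finished_pulling = 294.975924612  # m=+ offset of lastFinishedPulling

pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - pull_window
print(f"image pull window    : {pull_window:.9f}s")   # 2.457590621s
print(f"derived SLO duration : {slo_duration:.9f}s")  # 2.240283512s, matching podStartSLOduration
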
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.317900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.318108 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.335564 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.341640 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sng6c"] Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.362400 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.362904 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerName="controller-manager" containerID="cri-o://8cdb7f6f8f311e88187a767e3d8df8dd438551736c52aa30316f6ac1c335bb8a" gracePeriod=30 Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.445601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05678928-a7d3-4250-8454-abadf034f217-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.445729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkcx\" (UniqueName: \"kubernetes.io/projected/05678928-a7d3-4250-8454-abadf034f217-kube-api-access-njkcx\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.445771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05678928-a7d3-4250-8454-abadf034f217-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.477307 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.477551 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" podUID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" containerName="route-controller-manager" containerID="cri-o://9b0ece05345c858b8b63c8f3b002b353e9a738b2bcfe4664988a779255a3d4c2" gracePeriod=30 Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.546853 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-njkcx\" (UniqueName: \"kubernetes.io/projected/05678928-a7d3-4250-8454-abadf034f217-kube-api-access-njkcx\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.547052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05678928-a7d3-4250-8454-abadf034f217-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.547446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05678928-a7d3-4250-8454-abadf034f217-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.548524 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05678928-a7d3-4250-8454-abadf034f217-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.555613 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.588328 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05678928-a7d3-4250-8454-abadf034f217-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.614919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkcx\" (UniqueName: \"kubernetes.io/projected/05678928-a7d3-4250-8454-abadf034f217-kube-api-access-njkcx\") pod \"marketplace-operator-79b997595-sng6c\" (UID: \"05678928-a7d3-4250-8454-abadf034f217\") " pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.651729 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.657391 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:29 crc kubenswrapper[4796]: I1212 04:38:29.984576 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sng6c"] Dec 12 04:38:30 crc kubenswrapper[4796]: I1212 04:38:30.730400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" event={"ID":"05678928-a7d3-4250-8454-abadf034f217","Type":"ContainerStarted","Data":"055426b827376674a67428a2e9abb95b9f82dc4b63d2b870da3a6ecd94e635b9"} Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.020105 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.020167 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.067816 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.234620 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.235452 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.273354 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.794426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lg6bx" Dec 12 04:38:31 crc kubenswrapper[4796]: I1212 04:38:31.853833 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2krf" Dec 12 04:38:32 crc kubenswrapper[4796]: I1212 04:38:32.739846 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" event={"ID":"05678928-a7d3-4250-8454-abadf034f217","Type":"ContainerStarted","Data":"d9bd58334d5f0060568663fed81e1e3abac8ae23ba27da95518dd644e490f02c"} Dec 12 04:38:32 crc kubenswrapper[4796]: I1212 04:38:32.758845 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" podStartSLOduration=3.758828716 podStartE2EDuration="3.758828716s" podCreationTimestamp="2025-12-12 04:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:32.756328807 +0000 UTC m=+303.632345964" watchObservedRunningTime="2025-12-12 04:38:32.758828716 +0000 UTC m=+303.634845863" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.440381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.441700 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.483789 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.602134 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.602312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.643606 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.746713 4796 generic.go:334] "Generic (PLEG): container finished" podID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" containerID="9b0ece05345c858b8b63c8f3b002b353e9a738b2bcfe4664988a779255a3d4c2" exitCode=0 Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.746792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" event={"ID":"cd42823a-83a6-4a22-bec9-8cd20753bdb1","Type":"ContainerDied","Data":"9b0ece05345c858b8b63c8f3b002b353e9a738b2bcfe4664988a779255a3d4c2"} Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.748266 4796 generic.go:334] "Generic (PLEG): container finished" podID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerID="8cdb7f6f8f311e88187a767e3d8df8dd438551736c52aa30316f6ac1c335bb8a" exitCode=0 Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.748340 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" event={"ID":"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c","Type":"ContainerDied","Data":"8cdb7f6f8f311e88187a767e3d8df8dd438551736c52aa30316f6ac1c335bb8a"} Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.749958 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.751541 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.751591 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="caecfdb99b37c488d5164e6856ec14e7eacfec35adbf8eca4427dff282cce8ae" exitCode=137 Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.752220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"caecfdb99b37c488d5164e6856ec14e7eacfec35adbf8eca4427dff282cce8ae"} Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.752258 4796 scope.go:117] "RemoveContainer" containerID="3fdfd069498bb542bacaf7973fce216bec3f1db6edbe7145c4b5ecf15b2de1be" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.752949 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.763424 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sng6c" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 
04:38:33.792340 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9w78" Dec 12 04:38:33 crc kubenswrapper[4796]: I1212 04:38:33.807959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bbk7" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.067604 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.109237 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:34 crc kubenswrapper[4796]: E1212 04:38:34.109522 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerName="controller-manager" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.109546 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerName="controller-manager" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.109654 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" containerName="controller-manager" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.113029 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.122170 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.206573 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config\") pod \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.206639 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57tb6\" (UniqueName: \"kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6\") pod \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.206691 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert\") pod \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.206727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles\") pod \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.206764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca\") pod \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\" (UID: \"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 
04:38:34.207568 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" (UID: "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.207659 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config" (OuterVolumeSpecName: "config") pod "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" (UID: "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.208496 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" (UID: "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.209233 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.212725 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" (UID: "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.212808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6" (OuterVolumeSpecName: "kube-api-access-57tb6") pod "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" (UID: "fa0261aa-ad16-4189-aaf8-6aacb68d1f1c"). InnerVolumeSpecName "kube-api-access-57tb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.307830 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert\") pod \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.307925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5v6p\" (UniqueName: \"kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p\") pod \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.309428 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config\") pod \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.309519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca\") pod \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\" (UID: \"cd42823a-83a6-4a22-bec9-8cd20753bdb1\") " Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.309776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.309832 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxvp\" (UniqueName: \"kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310069 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config" (OuterVolumeSpecName: "config") pod "cd42823a-83a6-4a22-bec9-8cd20753bdb1" (UID: "cd42823a-83a6-4a22-bec9-8cd20753bdb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310105 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd42823a-83a6-4a22-bec9-8cd20753bdb1" (UID: "cd42823a-83a6-4a22-bec9-8cd20753bdb1"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310270 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310397 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310412 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310427 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57tb6\" (UniqueName: \"kubernetes.io/projected/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-kube-api-access-57tb6\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310440 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310451 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310461 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.310473 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd42823a-83a6-4a22-bec9-8cd20753bdb1-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.312884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd42823a-83a6-4a22-bec9-8cd20753bdb1" (UID: "cd42823a-83a6-4a22-bec9-8cd20753bdb1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.313612 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p" (OuterVolumeSpecName: "kube-api-access-h5v6p") pod "cd42823a-83a6-4a22-bec9-8cd20753bdb1" (UID: "cd42823a-83a6-4a22-bec9-8cd20753bdb1"). InnerVolumeSpecName "kube-api-access-h5v6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.411615 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.411702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.411722 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.411745 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddxvp\" (UniqueName: \"kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.412034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.412118 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd42823a-83a6-4a22-bec9-8cd20753bdb1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.412132 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5v6p\" (UniqueName: \"kubernetes.io/projected/cd42823a-83a6-4a22-bec9-8cd20753bdb1-kube-api-access-h5v6p\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.413041 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " 
pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.414742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.415948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.421195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.429421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddxvp\" (UniqueName: \"kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp\") pod \"controller-manager-857459b668-p4r5j\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.433935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.642976 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.758627 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" event={"ID":"cd42823a-83a6-4a22-bec9-8cd20753bdb1","Type":"ContainerDied","Data":"dd5961e69a10ba63bf8bd2f1b44940e2ad4bcb39ac31779cb0a23d821f06ce48"} Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.758714 4796 scope.go:117] "RemoveContainer" containerID="9b0ece05345c858b8b63c8f3b002b353e9a738b2bcfe4664988a779255a3d4c2" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.759389 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.760909 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.761432 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4mwh" event={"ID":"fa0261aa-ad16-4189-aaf8-6aacb68d1f1c","Type":"ContainerDied","Data":"c3c32755979083f2e3da1882daba1e51a1d91fa4d7952b71615dbc9e550f8cfe"} Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.763315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" event={"ID":"8c2c7c52-967b-4015-85bd-b5cefcc4b16c","Type":"ContainerStarted","Data":"f98a8acf70a37ebb60c54aa6f30895602ba5badc3bbb2f8f3e3b9645803b4f19"} Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.766227 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.805254 4796 scope.go:117] "RemoveContainer" containerID="8cdb7f6f8f311e88187a767e3d8df8dd438551736c52aa30316f6ac1c335bb8a" Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.827954 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.831534 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4722c"] Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.839043 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:38:34 crc kubenswrapper[4796]: I1212 04:38:34.842438 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4mwh"] Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.419380 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" path="/var/lib/kubelet/pods/cd42823a-83a6-4a22-bec9-8cd20753bdb1/volumes" Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.420053 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0261aa-ad16-4189-aaf8-6aacb68d1f1c" path="/var/lib/kubelet/pods/fa0261aa-ad16-4189-aaf8-6aacb68d1f1c/volumes" Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.777759 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" event={"ID":"8c2c7c52-967b-4015-85bd-b5cefcc4b16c","Type":"ContainerStarted","Data":"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c"} Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.778009 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.781240 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.782807 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a27977980fd0327f6e95ef3249c076559db5bdc915f9b9bb1743b57009ae0297"} Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.785596 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:35 crc kubenswrapper[4796]: I1212 04:38:35.815726 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" podStartSLOduration=6.815697736 podStartE2EDuration="6.815697736s" podCreationTimestamp="2025-12-12 04:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:35.807436484 +0000 UTC m=+306.683453671" watchObservedRunningTime="2025-12-12 04:38:35.815697736 +0000 UTC m=+306.691714913" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.552817 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:36 crc kubenswrapper[4796]: E1212 04:38:36.553110 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" containerName="route-controller-manager" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.553136 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" containerName="route-controller-manager" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.553343 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd42823a-83a6-4a22-bec9-8cd20753bdb1" containerName="route-controller-manager" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.553974 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.555891 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.556421 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.556536 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.556718 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.556723 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.558346 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.576591 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.752649 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzzr\" (UniqueName: \"kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.752751 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.752782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.752800 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.853585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca\") pod 
\"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.853644 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.853742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.853782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lzzr\" (UniqueName: \"kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.854788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.855188 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.859919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.870296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lzzr\" (UniqueName: \"kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr\") pod \"route-controller-manager-588756b8c7-knlxp\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:36 crc kubenswrapper[4796]: I1212 04:38:36.875052 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:37 crc kubenswrapper[4796]: I1212 04:38:37.360206 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:37 crc kubenswrapper[4796]: W1212 04:38:37.367875 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9152892_d4f4_4fd8_bad1_b43ee19291d8.slice/crio-1ff0569611bd8f4eb513f4d226d7327b3ffc40056adbd5164d0d27727b24ecd4 WatchSource:0}: Error finding container 1ff0569611bd8f4eb513f4d226d7327b3ffc40056adbd5164d0d27727b24ecd4: Status 404 returned error can't find the container with id 1ff0569611bd8f4eb513f4d226d7327b3ffc40056adbd5164d0d27727b24ecd4 Dec 12 04:38:37 crc kubenswrapper[4796]: I1212 04:38:37.795619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" event={"ID":"b9152892-d4f4-4fd8-bad1-b43ee19291d8","Type":"ContainerStarted","Data":"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4"} Dec 12 04:38:37 crc kubenswrapper[4796]: I1212 04:38:37.795672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" event={"ID":"b9152892-d4f4-4fd8-bad1-b43ee19291d8","Type":"ContainerStarted","Data":"1ff0569611bd8f4eb513f4d226d7327b3ffc40056adbd5164d0d27727b24ecd4"} Dec 12 04:38:37 crc kubenswrapper[4796]: I1212 04:38:37.820147 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" podStartSLOduration=8.820125234 podStartE2EDuration="8.820125234s" podCreationTimestamp="2025-12-12 04:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:37.813515745 +0000 UTC m=+308.689532892" watchObservedRunningTime="2025-12-12 04:38:37.820125234 +0000 UTC m=+308.696142401" Dec 12 04:38:38 crc kubenswrapper[4796]: I1212 04:38:38.801966 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:38 crc kubenswrapper[4796]: I1212 04:38:38.808215 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:42 crc kubenswrapper[4796]: I1212 04:38:42.701888 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:38:42 crc kubenswrapper[4796]: I1212 04:38:42.707757 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:38:42 crc kubenswrapper[4796]: I1212 04:38:42.819345 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.079189 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7cbxm"] Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.080398 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.109185 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7cbxm"] Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.151576 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.151769 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" podUID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" containerName="route-controller-manager" containerID="cri-o://1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4" gracePeriod=30 Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.216330 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.216632 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" podUID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" containerName="controller-manager" containerID="cri-o://4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c" gracePeriod=30 Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.267348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-certificates\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.267625 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-tls\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.267731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.267835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-trusted-ca\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.267949 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: 
\"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.268054 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.268166 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692g7\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-kube-api-access-692g7\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.268250 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-bound-sa-token\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692g7\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-kube-api-access-692g7\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369517 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-bound-sa-token\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369543 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-certificates\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-tls\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: 
I1212 04:38:52.369582 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369602 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-trusted-ca\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.369924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.370868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-trusted-ca\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.371027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-certificates\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.378579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.392843 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-registry-tls\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.393522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.421076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692g7\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-kube-api-access-692g7\") pod 
\"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.441254 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11ab8cda-92cc-4626-aa92-2c0d3262b8c2-bound-sa-token\") pod \"image-registry-66df7c8f76-7cbxm\" (UID: \"11ab8cda-92cc-4626-aa92-2c0d3262b8c2\") " pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.704665 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.792553 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.834070 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.873035 4796 generic.go:334] "Generic (PLEG): container finished" podID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" containerID="1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4" exitCode=0 Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.873155 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.873298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" event={"ID":"b9152892-d4f4-4fd8-bad1-b43ee19291d8","Type":"ContainerDied","Data":"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4"} Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.873337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp" event={"ID":"b9152892-d4f4-4fd8-bad1-b43ee19291d8","Type":"ContainerDied","Data":"1ff0569611bd8f4eb513f4d226d7327b3ffc40056adbd5164d0d27727b24ecd4"} Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.873354 4796 scope.go:117] "RemoveContainer" containerID="1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.874998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca\") pod \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875026 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lzzr\" (UniqueName: \"kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr\") pod \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert\") pod 
\"b9152892-d4f4-4fd8-bad1-b43ee19291d8\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875118 4796 generic.go:334] "Generic (PLEG): container finished" podID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" containerID="4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c" exitCode=0 Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config\") pod \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\" (UID: \"b9152892-d4f4-4fd8-bad1-b43ee19291d8\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875584 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875713 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" event={"ID":"8c2c7c52-967b-4015-85bd-b5cefcc4b16c","Type":"ContainerDied","Data":"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c"} Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875729 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857459b668-p4r5j" event={"ID":"8c2c7c52-967b-4015-85bd-b5cefcc4b16c","Type":"ContainerDied","Data":"f98a8acf70a37ebb60c54aa6f30895602ba5badc3bbb2f8f3e3b9645803b4f19"} Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.875981 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config" (OuterVolumeSpecName: "config") pod "b9152892-d4f4-4fd8-bad1-b43ee19291d8" (UID: "b9152892-d4f4-4fd8-bad1-b43ee19291d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.876397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9152892-d4f4-4fd8-bad1-b43ee19291d8" (UID: "b9152892-d4f4-4fd8-bad1-b43ee19291d8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.884419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9152892-d4f4-4fd8-bad1-b43ee19291d8" (UID: "b9152892-d4f4-4fd8-bad1-b43ee19291d8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.889235 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr" (OuterVolumeSpecName: "kube-api-access-9lzzr") pod "b9152892-d4f4-4fd8-bad1-b43ee19291d8" (UID: "b9152892-d4f4-4fd8-bad1-b43ee19291d8"). InnerVolumeSpecName "kube-api-access-9lzzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.899535 4796 scope.go:117] "RemoveContainer" containerID="1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4" Dec 12 04:38:52 crc kubenswrapper[4796]: E1212 04:38:52.900298 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4\": container with ID starting with 1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4 not found: ID does not exist" containerID="1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.900340 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4"} err="failed to get container status \"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4\": rpc error: code = NotFound desc = could not find container \"1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4\": container with ID starting with 1d3dbb6cb62a32daa12b27d91855202f0b185ad5a9598dceaf08a63490fc13e4 not found: ID does not exist" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.900374 4796 scope.go:117] "RemoveContainer" containerID="4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.920881 4796 scope.go:117] "RemoveContainer" containerID="4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c" Dec 12 04:38:52 crc kubenswrapper[4796]: E1212 04:38:52.922157 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c\": container with ID starting with 4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c not found: ID does not exist" containerID="4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.922191 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c"} err="failed to get container status \"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c\": rpc error: code = NotFound desc = could not find container \"4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c\": container with ID starting with 4c5cafe87e366206c8cda87ff956f49a441321f13cedbeffae200a6e79762a3c not found: ID does not exist" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.956781 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7cbxm"] Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.976663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddxvp\" (UniqueName: \"kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp\") pod \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.976724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca\") pod \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\" 
(UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.976775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert\") pod \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.976847 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config\") pod \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.976884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles\") pod \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\" (UID: \"8c2c7c52-967b-4015-85bd-b5cefcc4b16c\") " Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.977101 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.977118 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9152892-d4f4-4fd8-bad1-b43ee19291d8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.977128 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lzzr\" (UniqueName: \"kubernetes.io/projected/b9152892-d4f4-4fd8-bad1-b43ee19291d8-kube-api-access-9lzzr\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.977140 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9152892-d4f4-4fd8-bad1-b43ee19291d8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.978422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8c2c7c52-967b-4015-85bd-b5cefcc4b16c" (UID: "8c2c7c52-967b-4015-85bd-b5cefcc4b16c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.979236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c2c7c52-967b-4015-85bd-b5cefcc4b16c" (UID: "8c2c7c52-967b-4015-85bd-b5cefcc4b16c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.979790 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config" (OuterVolumeSpecName: "config") pod "8c2c7c52-967b-4015-85bd-b5cefcc4b16c" (UID: "8c2c7c52-967b-4015-85bd-b5cefcc4b16c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.983877 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp" (OuterVolumeSpecName: "kube-api-access-ddxvp") pod "8c2c7c52-967b-4015-85bd-b5cefcc4b16c" (UID: "8c2c7c52-967b-4015-85bd-b5cefcc4b16c"). InnerVolumeSpecName "kube-api-access-ddxvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:38:52 crc kubenswrapper[4796]: I1212 04:38:52.985496 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c2c7c52-967b-4015-85bd-b5cefcc4b16c" (UID: "8c2c7c52-967b-4015-85bd-b5cefcc4b16c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.078614 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.078650 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.078665 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddxvp\" (UniqueName: \"kubernetes.io/projected/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-kube-api-access-ddxvp\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.078674 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.078682 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2c7c52-967b-4015-85bd-b5cefcc4b16c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.200611 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.204002 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588756b8c7-knlxp"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.212654 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.217427 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-857459b668-p4r5j"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.421947 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" path="/var/lib/kubelet/pods/8c2c7c52-967b-4015-85bd-b5cefcc4b16c/volumes" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.422647 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" 
path="/var/lib/kubelet/pods/b9152892-d4f4-4fd8-bad1-b43ee19291d8/volumes" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.516573 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.591411 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 04:38:53 crc kubenswrapper[4796]: E1212 04:38:53.591640 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" containerName="controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.591655 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" containerName="controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: E1212 04:38:53.591677 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" containerName="route-controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.591684 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" containerName="route-controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.591768 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c7c52-967b-4015-85bd-b5cefcc4b16c" containerName="controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.591778 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9152892-d4f4-4fd8-bad1-b43ee19291d8" containerName="route-controller-manager" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.592137 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.594344 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.594962 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.599245 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.599457 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.599653 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.599734 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.605939 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.606095 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.607231 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.607467 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.607588 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.607698 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.610919 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.611706 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.616757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.631989 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.673753 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685570 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkmr\" (UniqueName: \"kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685592 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685612 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56vwh\" (UniqueName: \"kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.685668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-56vwh\" (UniqueName: \"kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787322 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkmr\" (UniqueName: \"kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787456 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.787507 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config\") pod 
\"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.788856 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.788868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.789031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.789192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.789269 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.793955 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.793993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.808439 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkmr\" (UniqueName: \"kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr\") pod \"route-controller-manager-5b8b6b7498-4pp6t\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:53 crc 
kubenswrapper[4796]: I1212 04:38:53.810434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56vwh\" (UniqueName: \"kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh\") pod \"controller-manager-859bf9f4f9-v8z9k\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.881571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" event={"ID":"11ab8cda-92cc-4626-aa92-2c0d3262b8c2","Type":"ContainerStarted","Data":"be3e7822be70abf4f61725156395bf049f50ff4e2430d460eac3e3d20dbec67a"} Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.881609 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" event={"ID":"11ab8cda-92cc-4626-aa92-2c0d3262b8c2","Type":"ContainerStarted","Data":"32dd9a016783c667ba141b10fc142c5279780dca4551e9265800dc1c71a69485"} Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.881663 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.908864 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:53 crc kubenswrapper[4796]: I1212 04:38:53.926399 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.185253 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" podStartSLOduration=2.185237797 podStartE2EDuration="2.185237797s" podCreationTimestamp="2025-12-12 04:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:53.906426083 +0000 UTC m=+324.782443230" watchObservedRunningTime="2025-12-12 04:38:54.185237797 +0000 UTC m=+325.061254944" Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.187445 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.440178 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 04:38:54 crc kubenswrapper[4796]: W1212 04:38:54.450524 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc895ae_37e3_47e3_a313_91e2d7d25ee8.slice/crio-1bb742af648a7ee6496b2e14d2489c8ded33dd21e9799d0a7d3552dd993a14b2 WatchSource:0}: Error finding container 1bb742af648a7ee6496b2e14d2489c8ded33dd21e9799d0a7d3552dd993a14b2: Status 404 returned error can't find the container with id 1bb742af648a7ee6496b2e14d2489c8ded33dd21e9799d0a7d3552dd993a14b2 Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.888156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" 
event={"ID":"edc895ae-37e3-47e3-a313-91e2d7d25ee8","Type":"ContainerStarted","Data":"00e06dd8dcb6fad7ae180a5c219b6931eb340158742998e700cd8eb7ea01ad2a"} Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.888223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" event={"ID":"edc895ae-37e3-47e3-a313-91e2d7d25ee8","Type":"ContainerStarted","Data":"1bb742af648a7ee6496b2e14d2489c8ded33dd21e9799d0a7d3552dd993a14b2"} Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.888623 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.889823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" event={"ID":"3aebcd83-2d6f-4f3d-a28d-313b3756c65f","Type":"ContainerStarted","Data":"c05a1403803fee7638f14450d31798394768fa8e6533b9fc10b87d39035570bc"} Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.889875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" event={"ID":"3aebcd83-2d6f-4f3d-a28d-313b3756c65f","Type":"ContainerStarted","Data":"5ec00755f78aa59adc873d5ab8b76f308bf9cf51809a2c65ada8f9d858e53c83"} Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.890150 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.896507 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 04:38:54 crc kubenswrapper[4796]: I1212 04:38:54.912496 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" podStartSLOduration=2.912481364 podStartE2EDuration="2.912481364s" podCreationTimestamp="2025-12-12 04:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:54.912125833 +0000 UTC m=+325.788143000" watchObservedRunningTime="2025-12-12 04:38:54.912481364 +0000 UTC m=+325.788498511" Dec 12 04:38:55 crc kubenswrapper[4796]: I1212 04:38:55.202890 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 04:38:55 crc kubenswrapper[4796]: I1212 04:38:55.222192 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" podStartSLOduration=3.222175747 podStartE2EDuration="3.222175747s" podCreationTimestamp="2025-12-12 04:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:38:54.951869364 +0000 UTC m=+325.827886511" watchObservedRunningTime="2025-12-12 04:38:55.222175747 +0000 UTC m=+326.098192884" Dec 12 04:39:12 crc kubenswrapper[4796]: I1212 04:39:12.716650 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7cbxm" Dec 12 04:39:12 crc kubenswrapper[4796]: I1212 04:39:12.810179 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:39:32 crc kubenswrapper[4796]: I1212 04:39:32.969311 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:39:32 crc kubenswrapper[4796]: I1212 04:39:32.970844 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:39:37 crc kubenswrapper[4796]: I1212 04:39:37.862053 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" podUID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" containerName="registry" containerID="cri-o://e7c878f00842cd45a5cb23e7f3ffa8ae23e39e90ff63916de802b3a2d17cd10b" gracePeriod=30 Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.144913 4796 generic.go:334] "Generic (PLEG): container finished" podID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" containerID="e7c878f00842cd45a5cb23e7f3ffa8ae23e39e90ff63916de802b3a2d17cd10b" exitCode=0 Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.144947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" event={"ID":"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3","Type":"ContainerDied","Data":"e7c878f00842cd45a5cb23e7f3ffa8ae23e39e90ff63916de802b3a2d17cd10b"} Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.252438 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322688 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59flv\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322800 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322829 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.322872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.323620 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.323559 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.324001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.324070 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\" (UID: \"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3\") " Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.325024 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.325043 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.328591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.331106 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.332123 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv" (OuterVolumeSpecName: "kube-api-access-59flv") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "kube-api-access-59flv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.335627 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.338095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.339946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" (UID: "78c92736-cc3b-4cd7-a5f3-41cdb62f39d3"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.426230 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.426304 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.426317 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59flv\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-kube-api-access-59flv\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.426325 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:38 crc kubenswrapper[4796]: I1212 04:39:38.426337 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.150484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" event={"ID":"78c92736-cc3b-4cd7-a5f3-41cdb62f39d3","Type":"ContainerDied","Data":"00bc8bd33c61736a5c5af8bd0389caa33ee55620770cfd68227e7478af173a98"} Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.150520 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mwr2j" Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.150803 4796 scope.go:117] "RemoveContainer" containerID="e7c878f00842cd45a5cb23e7f3ffa8ae23e39e90ff63916de802b3a2d17cd10b" Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.186660 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.193592 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mwr2j"] Dec 12 04:39:39 crc kubenswrapper[4796]: I1212 04:39:39.417964 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" path="/var/lib/kubelet/pods/78c92736-cc3b-4cd7-a5f3-41cdb62f39d3/volumes" Dec 12 04:40:02 crc kubenswrapper[4796]: I1212 04:40:02.969862 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:40:02 crc kubenswrapper[4796]: I1212 04:40:02.970487 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:40:32 crc kubenswrapper[4796]: I1212 04:40:32.969444 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:40:32 crc kubenswrapper[4796]: I1212 04:40:32.971696 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:40:32 crc kubenswrapper[4796]: I1212 04:40:32.971826 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:40:32 crc kubenswrapper[4796]: I1212 04:40:32.972681 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:40:32 crc kubenswrapper[4796]: I1212 04:40:32.972760 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df" gracePeriod=600 Dec 12 04:40:34 crc kubenswrapper[4796]: I1212 04:40:34.082139 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df" exitCode=0 Dec 12 04:40:34 crc kubenswrapper[4796]: I1212 04:40:34.082196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df"} Dec 12 04:40:34 crc kubenswrapper[4796]: I1212 04:40:34.082520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa"} Dec 12 04:40:34 crc kubenswrapper[4796]: I1212 04:40:34.083261 4796 scope.go:117] "RemoveContainer" containerID="59d79e93f64f3efb1a443964a601359ae16a5eeafa246d27798766d8652e1fe5" Dec 12 04:43:02 crc kubenswrapper[4796]: I1212 04:43:02.969662 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:43:02 crc kubenswrapper[4796]: I1212 04:43:02.972205 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:43:32 crc kubenswrapper[4796]: I1212 04:43:32.970340 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:43:32 crc kubenswrapper[4796]: I1212 04:43:32.971095 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:44:02 crc kubenswrapper[4796]: I1212 04:44:02.969643 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:44:02 crc kubenswrapper[4796]: I1212 04:44:02.970242 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:44:02 crc kubenswrapper[4796]: I1212 04:44:02.970769 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:44:02 crc kubenswrapper[4796]: I1212 04:44:02.971452 4796 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:44:02 crc kubenswrapper[4796]: I1212 04:44:02.971534 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa" gracePeriod=600 Dec 12 04:44:03 crc kubenswrapper[4796]: I1212 04:44:03.264513 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa" exitCode=0 Dec 12 04:44:03 crc kubenswrapper[4796]: I1212 04:44:03.264710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa"} Dec 12 04:44:03 crc kubenswrapper[4796]: I1212 04:44:03.264965 4796 scope.go:117] "RemoveContainer" containerID="827c3757d54f0b605c07f0ca45f13181c33c63c44a363eb4f827fef0f73982df" Dec 12 04:44:04 crc kubenswrapper[4796]: I1212 04:44:04.271595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d"} Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.495553 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-k98g8"] Dec 12 04:44:50 crc kubenswrapper[4796]: E1212 04:44:50.496321 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" containerName="registry" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.496338 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" containerName="registry" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.496460 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c92736-cc3b-4cd7-a5f3-41cdb62f39d3" containerName="registry" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.496873 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.500324 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.500382 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mpt4g" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.503388 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ww44f"] Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.504007 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ww44f" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.506634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hxw6w" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.509681 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.516779 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-k98g8"] Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.525575 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lzcb5"] Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.526214 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.529178 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bc662" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.542302 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ww44f"] Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.560827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lzcb5"] Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.575782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgw9t\" (UniqueName: \"kubernetes.io/projected/8860987b-111c-4cd3-b138-4cce9dce0ad8-kube-api-access-wgw9t\") pod \"cert-manager-5b446d88c5-ww44f\" (UID: \"8860987b-111c-4cd3-b138-4cce9dce0ad8\") " pod="cert-manager/cert-manager-5b446d88c5-ww44f" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.575852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997qw\" (UniqueName: \"kubernetes.io/projected/5702c38e-0a66-415d-ba4a-6a32f7dbbc70-kube-api-access-997qw\") pod \"cert-manager-webhook-5655c58dd6-lzcb5\" (UID: \"5702c38e-0a66-415d-ba4a-6a32f7dbbc70\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.575872 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/80433c18-6202-449a-8982-1d738afc9e14-kube-api-access-txfhg\") pod \"cert-manager-cainjector-7f985d654d-k98g8\" (UID: \"80433c18-6202-449a-8982-1d738afc9e14\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.676542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgw9t\" (UniqueName: \"kubernetes.io/projected/8860987b-111c-4cd3-b138-4cce9dce0ad8-kube-api-access-wgw9t\") pod \"cert-manager-5b446d88c5-ww44f\" (UID: \"8860987b-111c-4cd3-b138-4cce9dce0ad8\") " pod="cert-manager/cert-manager-5b446d88c5-ww44f" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.676598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997qw\" (UniqueName: \"kubernetes.io/projected/5702c38e-0a66-415d-ba4a-6a32f7dbbc70-kube-api-access-997qw\") pod \"cert-manager-webhook-5655c58dd6-lzcb5\" (UID: 
\"5702c38e-0a66-415d-ba4a-6a32f7dbbc70\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.676620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/80433c18-6202-449a-8982-1d738afc9e14-kube-api-access-txfhg\") pod \"cert-manager-cainjector-7f985d654d-k98g8\" (UID: \"80433c18-6202-449a-8982-1d738afc9e14\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.772771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgw9t\" (UniqueName: \"kubernetes.io/projected/8860987b-111c-4cd3-b138-4cce9dce0ad8-kube-api-access-wgw9t\") pod \"cert-manager-5b446d88c5-ww44f\" (UID: \"8860987b-111c-4cd3-b138-4cce9dce0ad8\") " pod="cert-manager/cert-manager-5b446d88c5-ww44f" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.773082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/80433c18-6202-449a-8982-1d738afc9e14-kube-api-access-txfhg\") pod \"cert-manager-cainjector-7f985d654d-k98g8\" (UID: \"80433c18-6202-449a-8982-1d738afc9e14\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.774955 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997qw\" (UniqueName: \"kubernetes.io/projected/5702c38e-0a66-415d-ba4a-6a32f7dbbc70-kube-api-access-997qw\") pod \"cert-manager-webhook-5655c58dd6-lzcb5\" (UID: \"5702c38e-0a66-415d-ba4a-6a32f7dbbc70\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.812550 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.823340 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ww44f" Dec 12 04:44:50 crc kubenswrapper[4796]: I1212 04:44:50.838694 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.092344 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ww44f"] Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.112243 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.152775 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-lzcb5"] Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.189241 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-k98g8"] Dec 12 04:44:51 crc kubenswrapper[4796]: W1212 04:44:51.189610 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80433c18_6202_449a_8982_1d738afc9e14.slice/crio-8a720819f7b1fcec5d4f77cb6593e315a8639a5eb73517ada94767327de5491f WatchSource:0}: Error finding container 8a720819f7b1fcec5d4f77cb6593e315a8639a5eb73517ada94767327de5491f: Status 404 returned error can't find the container with id 8a720819f7b1fcec5d4f77cb6593e315a8639a5eb73517ada94767327de5491f Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.540485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" event={"ID":"80433c18-6202-449a-8982-1d738afc9e14","Type":"ContainerStarted","Data":"8a720819f7b1fcec5d4f77cb6593e315a8639a5eb73517ada94767327de5491f"} Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.541185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ww44f" event={"ID":"8860987b-111c-4cd3-b138-4cce9dce0ad8","Type":"ContainerStarted","Data":"922538184f5730f53cfe8f78cfa4e4f7ed777075cc882d403a8e9d50fe2d2b23"} Dec 12 04:44:51 crc kubenswrapper[4796]: I1212 04:44:51.542065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" event={"ID":"5702c38e-0a66-415d-ba4a-6a32f7dbbc70","Type":"ContainerStarted","Data":"12984e7cddf3e5cc10ea0c6b90c3c2d085c483b116aee4dece2b0fdbdebee8af"} Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.577709 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" event={"ID":"5702c38e-0a66-415d-ba4a-6a32f7dbbc70","Type":"ContainerStarted","Data":"1aeef39219b38356fe2f1221ff76c442a321f9b5220795d38008e2059cf9b745"} Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.578339 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.580232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" event={"ID":"80433c18-6202-449a-8982-1d738afc9e14","Type":"ContainerStarted","Data":"942cb0af2a7c3abebbbf20f3427f55438ff9dd5f9ce01d393dfe38f14f48e213"} Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.583649 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ww44f" event={"ID":"8860987b-111c-4cd3-b138-4cce9dce0ad8","Type":"ContainerStarted","Data":"38a4a3f1b18d69bfe16391b551e170bb139da4b86b179d9b3ecb3458e1d82d63"} Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.595732 4796 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" podStartSLOduration=2.353201897 podStartE2EDuration="5.595720477s" podCreationTimestamp="2025-12-12 04:44:50 +0000 UTC" firstStartedPulling="2025-12-12 04:44:51.167259302 +0000 UTC m=+682.043276449" lastFinishedPulling="2025-12-12 04:44:54.409777882 +0000 UTC m=+685.285795029" observedRunningTime="2025-12-12 04:44:55.595545081 +0000 UTC m=+686.471562228" watchObservedRunningTime="2025-12-12 04:44:55.595720477 +0000 UTC m=+686.471737624" Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.614491 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-k98g8" podStartSLOduration=2.330971091 podStartE2EDuration="5.614471773s" podCreationTimestamp="2025-12-12 04:44:50 +0000 UTC" firstStartedPulling="2025-12-12 04:44:51.191718017 +0000 UTC m=+682.067735164" lastFinishedPulling="2025-12-12 04:44:54.475218699 +0000 UTC m=+685.351235846" observedRunningTime="2025-12-12 04:44:55.6130807 +0000 UTC m=+686.489097847" watchObservedRunningTime="2025-12-12 04:44:55.614471773 +0000 UTC m=+686.490488920" Dec 12 04:44:55 crc kubenswrapper[4796]: I1212 04:44:55.636671 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ww44f" podStartSLOduration=2.331629521 podStartE2EDuration="5.636646937s" podCreationTimestamp="2025-12-12 04:44:50 +0000 UTC" firstStartedPulling="2025-12-12 04:44:51.111983732 +0000 UTC m=+681.988000889" lastFinishedPulling="2025-12-12 04:44:54.417001158 +0000 UTC m=+685.293018305" observedRunningTime="2025-12-12 04:44:55.626321874 +0000 UTC m=+686.502339021" watchObservedRunningTime="2025-12-12 04:44:55.636646937 +0000 UTC m=+686.512664084" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.163004 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf"] Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.164925 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.167711 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.168073 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.184528 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf"] Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.211883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.212161 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlhr\" (UniqueName: \"kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.212271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.313074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlhr\" (UniqueName: \"kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.313148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.313189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.314087 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume\") pod 
\"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.328099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.329269 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlhr\" (UniqueName: \"kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr\") pod \"collect-profiles-29425245-gkhxf\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.488926 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.841609 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-lzcb5" Dec 12 04:45:00 crc kubenswrapper[4796]: I1212 04:45:00.866873 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf"] Dec 12 04:45:01 crc kubenswrapper[4796]: I1212 04:45:01.616909 4796 generic.go:334] "Generic (PLEG): container finished" podID="0ffeb1a8-57e6-4f77-a770-a48e7f99910f" containerID="01893e39c3f3a0ff3aea579b28212e83be4cd6d80df7503186350345b176bff2" exitCode=0 Dec 12 04:45:01 crc kubenswrapper[4796]: I1212 04:45:01.616970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" event={"ID":"0ffeb1a8-57e6-4f77-a770-a48e7f99910f","Type":"ContainerDied","Data":"01893e39c3f3a0ff3aea579b28212e83be4cd6d80df7503186350345b176bff2"} Dec 12 04:45:01 crc kubenswrapper[4796]: I1212 04:45:01.617185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" event={"ID":"0ffeb1a8-57e6-4f77-a770-a48e7f99910f","Type":"ContainerStarted","Data":"37542dfcf760277bff8d08b290e5083378ca02b6784e38bcdc6c69f50261cc2c"} Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.855761 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.945707 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume\") pod \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.945749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume\") pod \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.945773 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlhr\" (UniqueName: \"kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr\") pod \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\" (UID: \"0ffeb1a8-57e6-4f77-a770-a48e7f99910f\") " Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.946570 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ffeb1a8-57e6-4f77-a770-a48e7f99910f" (UID: "0ffeb1a8-57e6-4f77-a770-a48e7f99910f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.950671 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ffeb1a8-57e6-4f77-a770-a48e7f99910f" (UID: "0ffeb1a8-57e6-4f77-a770-a48e7f99910f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:45:02 crc kubenswrapper[4796]: I1212 04:45:02.950884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr" (OuterVolumeSpecName: "kube-api-access-wtlhr") pod "0ffeb1a8-57e6-4f77-a770-a48e7f99910f" (UID: "0ffeb1a8-57e6-4f77-a770-a48e7f99910f"). InnerVolumeSpecName "kube-api-access-wtlhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.047188 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.047214 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.047224 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlhr\" (UniqueName: \"kubernetes.io/projected/0ffeb1a8-57e6-4f77-a770-a48e7f99910f-kube-api-access-wtlhr\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.631184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" event={"ID":"0ffeb1a8-57e6-4f77-a770-a48e7f99910f","Type":"ContainerDied","Data":"37542dfcf760277bff8d08b290e5083378ca02b6784e38bcdc6c69f50261cc2c"} Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.631575 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37542dfcf760277bff8d08b290e5083378ca02b6784e38bcdc6c69f50261cc2c" Dec 12 04:45:03 crc kubenswrapper[4796]: I1212 04:45:03.631414 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf" Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.611877 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-996v7"] Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614410 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-controller" containerID="cri-o://c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614569 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-acl-logging" containerID="cri-o://894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614504 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="nbdb" containerID="cri-o://a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614477 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="northd" containerID="cri-o://d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614518 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614515 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-node" containerID="cri-o://e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.614485 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="sbdb" containerID="cri-o://afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.676272 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" containerID="cri-o://785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" gracePeriod=30 Dec 12 04:45:07 crc kubenswrapper[4796]: E1212 04:45:07.731705 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439475ac_7f06_4a47_9a81_9f4cf4083c38.slice/crio-conmon-c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439475ac_7f06_4a47_9a81_9f4cf4083c38.slice/crio-c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439475ac_7f06_4a47_9a81_9f4cf4083c38.slice/crio-conmon-674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184.scope\": RecentStats: unable to find data in memory cache]" Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.966195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/3.log" Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.968601 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovn-acl-logging/0.log" Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.969142 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovn-controller/0.log" Dec 12 04:45:07 crc kubenswrapper[4796]: I1212 04:45:07.969498 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008196 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008225 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008250 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfckw\" (UniqueName: \"kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008270 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008306 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008326 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008341 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008360 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008374 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008394 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008438 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008455 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008513 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008554 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008568 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units\") pod \"439475ac-7f06-4a47-9a81-9f4cf4083c38\" (UID: \"439475ac-7f06-4a47-9a81-9f4cf4083c38\") " Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008348 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009304 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008378 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008393 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008751 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008766 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008778 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008789 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket" (OuterVolumeSpecName: "log-socket") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.008843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log" (OuterVolumeSpecName: "node-log") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009171 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash" (OuterVolumeSpecName: "host-slash") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009255 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.009638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.010252 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.016690 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021356 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw" (OuterVolumeSpecName: "kube-api-access-nfckw") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "kube-api-access-nfckw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021402 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4jkcb"] Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021657 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021674 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021682 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffeb1a8-57e6-4f77-a770-a48e7f99910f" containerName="collect-profiles" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021688 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffeb1a8-57e6-4f77-a770-a48e7f99910f" containerName="collect-profiles" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021719 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kubecfg-setup" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021726 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kubecfg-setup" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021734 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="nbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021739 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="nbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021750 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021755 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021764 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021770 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021777 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-node" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021782 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-node" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021792 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021797 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021807 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-acl-logging" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021813 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-acl-logging" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021821 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021827 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021837 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021842 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021851 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="sbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021857 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="sbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.021870 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="northd" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.021877 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="northd" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022079 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022093 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="sbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022103 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022112 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022119 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovn-acl-logging" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022128 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022137 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022144 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffeb1a8-57e6-4f77-a770-a48e7f99910f" containerName="collect-profiles" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022154 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="kube-rbac-proxy-node" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022166 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="northd" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022174 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="nbdb" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.022306 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022316 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022412 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.022570 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerName="ovnkube-controller" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.024435 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.026144 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "439475ac-7f06-4a47-9a81-9f4cf4083c38" (UID: "439475ac-7f06-4a47-9a81-9f4cf4083c38"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110224 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-netns\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-config\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110693 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-var-lib-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-kubelet\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110795 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110823 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-bin\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110850 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovn-node-metrics-cert\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-systemd-units\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.110895 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-etc-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mf5q\" (UniqueName: \"kubernetes.io/projected/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-kube-api-access-5mf5q\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-env-overrides\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-node-log\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111245 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-script-lib\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-ovn\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111331 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-systemd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111351 
4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-log-socket\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-netd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111404 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-slash\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111497 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111511 4796 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111522 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111533 4796 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111545 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111559 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfckw\" (UniqueName: \"kubernetes.io/projected/439475ac-7f06-4a47-9a81-9f4cf4083c38-kube-api-access-nfckw\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111570 4796 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111581 4796 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111595 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111605 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111616 4796 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-slash\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111626 4796 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111637 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111648 4796 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111658 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111669 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111680 4796 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111691 4796 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-log-socket\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111702 4796 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/439475ac-7f06-4a47-9a81-9f4cf4083c38-node-log\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.111713 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/439475ac-7f06-4a47-9a81-9f4cf4083c38-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213114 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213239 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213502 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-script-lib\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-ovn-kubernetes\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-ovn\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-systemd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-log-socket\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-netd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-slash\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213792 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-config\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-netns\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-var-lib-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-kubelet\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.213976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-bin\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214011 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovn-node-metrics-cert\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-systemd-units\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-etc-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214132 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mf5q\" (UniqueName: 
\"kubernetes.io/projected/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-kube-api-access-5mf5q\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-env-overrides\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-node-log\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214342 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-node-log\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-ovn\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214430 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-script-lib\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214432 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-systemd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214460 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-log-socket\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214491 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-run-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214499 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-bin\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214520 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-cni-netd\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.214548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-slash\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-etc-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovnkube-config\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-run-netns\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-systemd-units\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-var-lib-openvswitch\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-host-kubelet\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.215656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-env-overrides\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.219477 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-ovn-node-metrics-cert\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.245382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mf5q\" (UniqueName: \"kubernetes.io/projected/4d24ca65-ecb9-4dfd-8feb-63cfc30a871e-kube-api-access-5mf5q\") pod \"ovnkube-node-4jkcb\" (UID: \"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e\") " pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.341530 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:08 crc kubenswrapper[4796]: W1212 04:45:08.376396 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d24ca65_ecb9_4dfd_8feb_63cfc30a871e.slice/crio-fdbb8585c268b777bed4f04452b172f8f1fa8ff98f323765c3c23ec1232a0c0e WatchSource:0}: Error finding container fdbb8585c268b777bed4f04452b172f8f1fa8ff98f323765c3c23ec1232a0c0e: Status 404 returned error can't find the container with id fdbb8585c268b777bed4f04452b172f8f1fa8ff98f323765c3c23ec1232a0c0e Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.667513 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovnkube-controller/3.log" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.671031 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovn-acl-logging/0.log" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.671610 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-996v7_439475ac-7f06-4a47-9a81-9f4cf4083c38/ovn-controller/0.log" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672059 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672096 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672113 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672125 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672137 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672149 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" 
containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672159 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" exitCode=143 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672171 4796 generic.go:334] "Generic (PLEG): container finished" podID="439475ac-7f06-4a47-9a81-9f4cf4083c38" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" exitCode=143 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672356 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672378 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672442 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672460 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672469 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672481 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 
04:45:08.672490 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672500 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672509 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672518 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672533 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672560 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672571 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672580 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672590 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672599 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672607 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672616 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672625 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 
04:45:08.672634 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672643 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672676 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672686 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672695 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672704 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672713 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672722 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672731 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672740 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672749 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672759 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672771 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" 
event={"ID":"439475ac-7f06-4a47-9a81-9f4cf4083c38","Type":"ContainerDied","Data":"e4251de98e807675579ba5b1b3f2a10a55f2438c66a1cf3c461ac0b9468fb09c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672787 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672805 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672815 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672825 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672835 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672844 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672853 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672862 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672870 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672879 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.672901 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.673228 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-996v7" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.674907 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/2.log" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.675585 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/1.log" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.675925 4796 generic.go:334] "Generic (PLEG): container finished" podID="55b96fce-0e56-40cb-ab90-873a8421260b" containerID="414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3" exitCode=2 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.675998 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerDied","Data":"414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.676215 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.677444 4796 generic.go:334] "Generic (PLEG): container finished" podID="4d24ca65-ecb9-4dfd-8feb-63cfc30a871e" containerID="727f3d0eaba307f94827225b70b558fdc22bb25303e846c36b3c93ab083a0e38" exitCode=0 Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.677496 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerDied","Data":"727f3d0eaba307f94827225b70b558fdc22bb25303e846c36b3c93ab083a0e38"} Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.677504 4796 scope.go:117] "RemoveContainer" containerID="414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.677526 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"fdbb8585c268b777bed4f04452b172f8f1fa8ff98f323765c3c23ec1232a0c0e"} Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.677798 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b68x4_openshift-multus(55b96fce-0e56-40cb-ab90-873a8421260b)\"" pod="openshift-multus/multus-b68x4" podUID="55b96fce-0e56-40cb-ab90-873a8421260b" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.691385 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.724476 4796 scope.go:117] "RemoveContainer" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.754028 4796 scope.go:117] "RemoveContainer" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.771642 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-996v7"] Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 
04:45:08.775750 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-996v7"] Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.793018 4796 scope.go:117] "RemoveContainer" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.806261 4796 scope.go:117] "RemoveContainer" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.817317 4796 scope.go:117] "RemoveContainer" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.829098 4796 scope.go:117] "RemoveContainer" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.842476 4796 scope.go:117] "RemoveContainer" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.856430 4796 scope.go:117] "RemoveContainer" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.869527 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.874849 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.874896 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} err="failed to get container status \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": rpc error: code = NotFound desc = could not find container \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.874926 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.875212 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": container with ID starting with 1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540 not found: ID does not exist" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.875241 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} err="failed to get container status \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": rpc error: code = NotFound desc = could not find container \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": container 
with ID starting with 1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.875259 4796 scope.go:117] "RemoveContainer" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.875680 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": container with ID starting with afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114 not found: ID does not exist" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.875735 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} err="failed to get container status \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": rpc error: code = NotFound desc = could not find container \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": container with ID starting with afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.875759 4796 scope.go:117] "RemoveContainer" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.876623 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": container with ID starting with a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06 not found: ID does not exist" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.876654 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} err="failed to get container status \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": rpc error: code = NotFound desc = could not find container \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": container with ID starting with a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.876671 4796 scope.go:117] "RemoveContainer" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.876949 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": container with ID starting with d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471 not found: ID does not exist" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.876973 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} err="failed to get container status 
\"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": rpc error: code = NotFound desc = could not find container \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": container with ID starting with d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877022 4796 scope.go:117] "RemoveContainer" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.877340 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": container with ID starting with 674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184 not found: ID does not exist" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877388 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} err="failed to get container status \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": rpc error: code = NotFound desc = could not find container \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": container with ID starting with 674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877405 4796 scope.go:117] "RemoveContainer" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.877670 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": container with ID starting with e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c not found: ID does not exist" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877696 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} err="failed to get container status \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": rpc error: code = NotFound desc = could not find container \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": container with ID starting with e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877713 4796 scope.go:117] "RemoveContainer" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.877937 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": container with ID starting with 894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e not found: ID does not exist" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877959 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} err="failed to get container status \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": rpc error: code = NotFound desc = could not find container \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": container with ID starting with 894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.877975 4796 scope.go:117] "RemoveContainer" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.878224 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": container with ID starting with c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2 not found: ID does not exist" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878244 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} err="failed to get container status \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": rpc error: code = NotFound desc = could not find container \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": container with ID starting with c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878261 4796 scope.go:117] "RemoveContainer" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: E1212 04:45:08.878486 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": container with ID starting with 7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37 not found: ID does not exist" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878509 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} err="failed to get container status \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": rpc error: code = NotFound desc = could not find container \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": container with ID starting with 7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878525 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878787 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} err="failed to get container status \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": rpc error: code = NotFound desc = could not find container 
\"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.878806 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879213 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} err="failed to get container status \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": rpc error: code = NotFound desc = could not find container \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": container with ID starting with 1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879236 4796 scope.go:117] "RemoveContainer" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879492 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} err="failed to get container status \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": rpc error: code = NotFound desc = could not find container \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": container with ID starting with afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879516 4796 scope.go:117] "RemoveContainer" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879719 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} err="failed to get container status \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": rpc error: code = NotFound desc = could not find container \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": container with ID starting with a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879743 4796 scope.go:117] "RemoveContainer" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.879983 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} err="failed to get container status \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": rpc error: code = NotFound desc = could not find container \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": container with ID starting with d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880008 4796 scope.go:117] "RemoveContainer" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880239 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} err="failed to get container status \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": rpc error: code = NotFound desc = could not find container \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": container with ID starting with 674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880261 4796 scope.go:117] "RemoveContainer" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880502 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} err="failed to get container status \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": rpc error: code = NotFound desc = could not find container \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": container with ID starting with e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880523 4796 scope.go:117] "RemoveContainer" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880848 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} err="failed to get container status \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": rpc error: code = NotFound desc = could not find container \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": container with ID starting with 894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.880901 4796 scope.go:117] "RemoveContainer" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881241 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} err="failed to get container status \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": rpc error: code = NotFound desc = could not find container \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": container with ID starting with c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881268 4796 scope.go:117] "RemoveContainer" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881544 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} err="failed to get container status \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": rpc error: code = NotFound desc = could not find container \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": container with ID starting with 
7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881567 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881774 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} err="failed to get container status \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": rpc error: code = NotFound desc = could not find container \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.881796 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882099 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} err="failed to get container status \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": rpc error: code = NotFound desc = could not find container \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": container with ID starting with 1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882159 4796 scope.go:117] "RemoveContainer" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882513 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} err="failed to get container status \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": rpc error: code = NotFound desc = could not find container \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": container with ID starting with afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882537 4796 scope.go:117] "RemoveContainer" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882824 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} err="failed to get container status \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": rpc error: code = NotFound desc = could not find container \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": container with ID starting with a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.882848 4796 scope.go:117] "RemoveContainer" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884211 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} err="failed to get container status \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": rpc error: code = NotFound desc = could not find container \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": container with ID starting with d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884236 4796 scope.go:117] "RemoveContainer" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884580 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} err="failed to get container status \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": rpc error: code = NotFound desc = could not find container \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": container with ID starting with 674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884605 4796 scope.go:117] "RemoveContainer" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884828 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} err="failed to get container status \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": rpc error: code = NotFound desc = could not find container \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": container with ID starting with e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.884851 4796 scope.go:117] "RemoveContainer" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885069 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} err="failed to get container status \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": rpc error: code = NotFound desc = could not find container \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": container with ID starting with 894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885093 4796 scope.go:117] "RemoveContainer" containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885327 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} err="failed to get container status \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": rpc error: code = NotFound desc = could not find container \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": container with ID starting with c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2 not found: ID does not exist" Dec 
12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885351 4796 scope.go:117] "RemoveContainer" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885569 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} err="failed to get container status \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": rpc error: code = NotFound desc = could not find container \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": container with ID starting with 7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885592 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885805 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} err="failed to get container status \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": rpc error: code = NotFound desc = could not find container \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.885828 4796 scope.go:117] "RemoveContainer" containerID="1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886045 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540"} err="failed to get container status \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": rpc error: code = NotFound desc = could not find container \"1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540\": container with ID starting with 1c694edf43ec4f9623bc835fbd7f4b8319a43112b0bfed6778ab8ee7998ea540 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886073 4796 scope.go:117] "RemoveContainer" containerID="afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886307 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114"} err="failed to get container status \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": rpc error: code = NotFound desc = could not find container \"afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114\": container with ID starting with afb33b30c438f5e51068ffa7d2ead17cbda2aa056affacf0b0e3aee4dd2d7114 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886333 4796 scope.go:117] "RemoveContainer" containerID="a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886557 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06"} err="failed to get container status 
\"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": rpc error: code = NotFound desc = could not find container \"a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06\": container with ID starting with a228479044b3a9dd2da5e7860eef954f4f3c7d0e8a84bafd08b355d7f5b5df06 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886581 4796 scope.go:117] "RemoveContainer" containerID="d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886815 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471"} err="failed to get container status \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": rpc error: code = NotFound desc = could not find container \"d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471\": container with ID starting with d36e74b86662084e20e9284104cb7c3d24d78fde859c3ce6e2bdd8c314c90471 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.886837 4796 scope.go:117] "RemoveContainer" containerID="674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887077 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184"} err="failed to get container status \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": rpc error: code = NotFound desc = could not find container \"674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184\": container with ID starting with 674d2a613916e008d029979dfb2ed784b7c3464f3bd03ee78c843765ab182184 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887103 4796 scope.go:117] "RemoveContainer" containerID="e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887354 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c"} err="failed to get container status \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": rpc error: code = NotFound desc = could not find container \"e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c\": container with ID starting with e2f8669e9ae4beb113e414dd7ebccd00952f0571082e1a636444a61ed0c7177c not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887379 4796 scope.go:117] "RemoveContainer" containerID="894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887615 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e"} err="failed to get container status \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": rpc error: code = NotFound desc = could not find container \"894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e\": container with ID starting with 894b34bd6d9eee420ee24b889225cb4e0548e54aee7962dc06b2f4761603aa0e not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887639 4796 scope.go:117] "RemoveContainer" 
containerID="c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887919 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2"} err="failed to get container status \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": rpc error: code = NotFound desc = could not find container \"c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2\": container with ID starting with c5d48de78fea33c86ad02bcdcf305af665c4b77e9f3d6465c8b3804a391dbbd2 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.887943 4796 scope.go:117] "RemoveContainer" containerID="7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.888422 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37"} err="failed to get container status \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": rpc error: code = NotFound desc = could not find container \"7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37\": container with ID starting with 7d5f5f57599aa3739399b261ecdba0a1d3cfab7a8a8f573bbee04649d49e6f37 not found: ID does not exist" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.888473 4796 scope.go:117] "RemoveContainer" containerID="785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120" Dec 12 04:45:08 crc kubenswrapper[4796]: I1212 04:45:08.889161 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120"} err="failed to get container status \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": rpc error: code = NotFound desc = could not find container \"785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120\": container with ID starting with 785f7127bcd11792e261221a39ee8e51a974861428a170445ea4ffb38b358120 not found: ID does not exist" Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.420195 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439475ac-7f06-4a47-9a81-9f4cf4083c38" path="/var/lib/kubelet/pods/439475ac-7f06-4a47-9a81-9f4cf4083c38/volumes" Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686190 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"bfdb9e11a62bd0ae343bf2f6e64b4c0075ebe633933e608636e2839b58d7d226"} Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686237 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"eac399a95a3a6e5711622331798603d3619705a058f2ad0a39654dcb4087c77a"} Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"4c323012a03f6bc822ae962f6207c85c46b7025544cd493fda96649bab92d077"} Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686262 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"4bc893f8d9cb7dbd9608cfd26b9ada031177b9c19368b9ce122512a04d32394e"} Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686302 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"7b029486abfee8060c651ef7e518eba6ec393e46d41a95286d56e5be062de7cd"} Dec 12 04:45:09 crc kubenswrapper[4796]: I1212 04:45:09.686315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"aff990b3864d2c3bc9370838ba8a10981b1430478f5e0c79877c8fd236e767c1"} Dec 12 04:45:11 crc kubenswrapper[4796]: I1212 04:45:11.702066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"bc00b21c0fd976dd34e880dd2de152f6376068ceec7489aa1537dce2b76a4030"} Dec 12 04:45:14 crc kubenswrapper[4796]: I1212 04:45:14.726739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" event={"ID":"4d24ca65-ecb9-4dfd-8feb-63cfc30a871e","Type":"ContainerStarted","Data":"d48791a9ee7a6bf2af5e91ec9f5aa0188a2136ef6c56b45cedc7b173b7a38b08"} Dec 12 04:45:14 crc kubenswrapper[4796]: I1212 04:45:14.727391 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:14 crc kubenswrapper[4796]: I1212 04:45:14.727414 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:14 crc kubenswrapper[4796]: I1212 04:45:14.750975 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:14 crc kubenswrapper[4796]: I1212 04:45:14.754909 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" podStartSLOduration=6.75489894 podStartE2EDuration="6.75489894s" podCreationTimestamp="2025-12-12 04:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:45:14.751027999 +0000 UTC m=+705.627045146" watchObservedRunningTime="2025-12-12 04:45:14.75489894 +0000 UTC m=+705.630916087" Dec 12 04:45:15 crc kubenswrapper[4796]: I1212 04:45:15.732316 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:15 crc kubenswrapper[4796]: I1212 04:45:15.767310 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:21 crc kubenswrapper[4796]: I1212 04:45:21.411755 4796 scope.go:117] "RemoveContainer" containerID="414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3" Dec 12 04:45:21 crc kubenswrapper[4796]: E1212 04:45:21.412569 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b68x4_openshift-multus(55b96fce-0e56-40cb-ab90-873a8421260b)\"" pod="openshift-multus/multus-b68x4" 
podUID="55b96fce-0e56-40cb-ab90-873a8421260b" Dec 12 04:45:29 crc kubenswrapper[4796]: I1212 04:45:29.765348 4796 scope.go:117] "RemoveContainer" containerID="503beb04373f595a2a30a69c0c0f34281991839363f4b0bb0d95dddaecd9f1bd" Dec 12 04:45:29 crc kubenswrapper[4796]: I1212 04:45:29.810934 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/2.log" Dec 12 04:45:35 crc kubenswrapper[4796]: I1212 04:45:35.411894 4796 scope.go:117] "RemoveContainer" containerID="414a02d2d1b8c6bef9995acc8d6d8a11fd7a85b8235740c990a18fd12c22fdb3" Dec 12 04:45:35 crc kubenswrapper[4796]: I1212 04:45:35.855030 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b68x4_55b96fce-0e56-40cb-ab90-873a8421260b/kube-multus/2.log" Dec 12 04:45:35 crc kubenswrapper[4796]: I1212 04:45:35.855507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b68x4" event={"ID":"55b96fce-0e56-40cb-ab90-873a8421260b","Type":"ContainerStarted","Data":"71b804768a93b88ab9066fc4ca202f0581f5c2b2d51c7d5383d59bfe8d521660"} Dec 12 04:45:38 crc kubenswrapper[4796]: I1212 04:45:38.369249 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4jkcb" Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.898995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw"] Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.900442 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.902564 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.911324 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw"] Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.973817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.974085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mskb\" (UniqueName: \"kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:39 crc kubenswrapper[4796]: I1212 04:45:39.974215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.075051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.075133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mskb\" (UniqueName: \"kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.075184 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.075723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.075719 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.098823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mskb\" (UniqueName: \"kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.218726 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.634065 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw"] Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.887132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerStarted","Data":"b6a797d9d2333303c363e91555b079a9bfef2e05afbcfc4e6885f04560c1d803"} Dec 12 04:45:40 crc kubenswrapper[4796]: I1212 04:45:40.887174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerStarted","Data":"4d84f3b4fbd735557c15123cd89aa2d3ffbdc2b39b97b726c987f98377912647"} Dec 12 04:45:42 crc kubenswrapper[4796]: I1212 04:45:42.898438 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerID="b6a797d9d2333303c363e91555b079a9bfef2e05afbcfc4e6885f04560c1d803" exitCode=0 Dec 12 04:45:42 crc kubenswrapper[4796]: I1212 04:45:42.898510 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerDied","Data":"b6a797d9d2333303c363e91555b079a9bfef2e05afbcfc4e6885f04560c1d803"} Dec 12 04:45:44 crc kubenswrapper[4796]: I1212 04:45:44.917384 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerID="f61430b3289c2c5cf261a86795d65be29adabad5db07d37d7a92d4b550775ea3" exitCode=0 Dec 12 04:45:44 crc kubenswrapper[4796]: I1212 04:45:44.917527 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerDied","Data":"f61430b3289c2c5cf261a86795d65be29adabad5db07d37d7a92d4b550775ea3"} Dec 12 04:45:45 crc kubenswrapper[4796]: I1212 04:45:45.922676 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerID="e65112a82321705d42faead110dfb6969bd6daf8c5918bb2052ec77fd61fbfb5" exitCode=0 Dec 12 04:45:45 crc kubenswrapper[4796]: I1212 04:45:45.922726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerDied","Data":"e65112a82321705d42faead110dfb6969bd6daf8c5918bb2052ec77fd61fbfb5"} Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.186511 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.273086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle\") pod \"1fff455f-8742-424b-96a4-32f9ddac34f7\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.273206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mskb\" (UniqueName: \"kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb\") pod \"1fff455f-8742-424b-96a4-32f9ddac34f7\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.273321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util\") pod \"1fff455f-8742-424b-96a4-32f9ddac34f7\" (UID: \"1fff455f-8742-424b-96a4-32f9ddac34f7\") " Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.273738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle" (OuterVolumeSpecName: "bundle") pod "1fff455f-8742-424b-96a4-32f9ddac34f7" (UID: "1fff455f-8742-424b-96a4-32f9ddac34f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.277948 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb" (OuterVolumeSpecName: "kube-api-access-9mskb") pod "1fff455f-8742-424b-96a4-32f9ddac34f7" (UID: "1fff455f-8742-424b-96a4-32f9ddac34f7"). InnerVolumeSpecName "kube-api-access-9mskb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.283921 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util" (OuterVolumeSpecName: "util") pod "1fff455f-8742-424b-96a4-32f9ddac34f7" (UID: "1fff455f-8742-424b-96a4-32f9ddac34f7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.374598 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-util\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.374625 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fff455f-8742-424b-96a4-32f9ddac34f7-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.374634 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mskb\" (UniqueName: \"kubernetes.io/projected/1fff455f-8742-424b-96a4-32f9ddac34f7-kube-api-access-9mskb\") on node \"crc\" DevicePath \"\"" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.938512 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" event={"ID":"1fff455f-8742-424b-96a4-32f9ddac34f7","Type":"ContainerDied","Data":"4d84f3b4fbd735557c15123cd89aa2d3ffbdc2b39b97b726c987f98377912647"} Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.938852 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d84f3b4fbd735557c15123cd89aa2d3ffbdc2b39b97b726c987f98377912647" Dec 12 04:45:47 crc kubenswrapper[4796]: I1212 04:45:47.938595 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.968260 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-w2zw4"] Dec 12 04:45:49 crc kubenswrapper[4796]: E1212 04:45:49.968684 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="util" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.968696 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="util" Dec 12 04:45:49 crc kubenswrapper[4796]: E1212 04:45:49.968710 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="pull" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.968716 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="pull" Dec 12 04:45:49 crc kubenswrapper[4796]: E1212 04:45:49.968727 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="extract" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.968734 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="extract" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.968823 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fff455f-8742-424b-96a4-32f9ddac34f7" containerName="extract" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.969139 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.970779 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.973710 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.976314 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-h7npn" Dec 12 04:45:49 crc kubenswrapper[4796]: I1212 04:45:49.983823 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-w2zw4"] Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.003559 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9wv\" (UniqueName: \"kubernetes.io/projected/3637ba14-6803-4897-9b95-09119916eaa5-kube-api-access-st9wv\") pod \"nmstate-operator-6769fb99d-w2zw4\" (UID: \"3637ba14-6803-4897-9b95-09119916eaa5\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.104311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st9wv\" (UniqueName: \"kubernetes.io/projected/3637ba14-6803-4897-9b95-09119916eaa5-kube-api-access-st9wv\") pod \"nmstate-operator-6769fb99d-w2zw4\" (UID: \"3637ba14-6803-4897-9b95-09119916eaa5\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.127246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st9wv\" (UniqueName: \"kubernetes.io/projected/3637ba14-6803-4897-9b95-09119916eaa5-kube-api-access-st9wv\") pod \"nmstate-operator-6769fb99d-w2zw4\" (UID: \"3637ba14-6803-4897-9b95-09119916eaa5\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.282713 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.697753 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-w2zw4"] Dec 12 04:45:50 crc kubenswrapper[4796]: I1212 04:45:50.953696 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" event={"ID":"3637ba14-6803-4897-9b95-09119916eaa5","Type":"ContainerStarted","Data":"772bc87d090fe4bc83028a1c972b8d42b7b358976c519eab5d54d7901901e3ab"} Dec 12 04:45:53 crc kubenswrapper[4796]: I1212 04:45:53.975339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" event={"ID":"3637ba14-6803-4897-9b95-09119916eaa5","Type":"ContainerStarted","Data":"0309fcd6a9d8f227a5053bca383b91a8de8fd69c6db711b7b987cc1d72afab51"} Dec 12 04:45:53 crc kubenswrapper[4796]: I1212 04:45:53.997874 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-w2zw4" podStartSLOduration=2.097842687 podStartE2EDuration="4.997851749s" podCreationTimestamp="2025-12-12 04:45:49 +0000 UTC" firstStartedPulling="2025-12-12 04:45:50.699134564 +0000 UTC m=+741.575151711" lastFinishedPulling="2025-12-12 04:45:53.599143626 +0000 UTC m=+744.475160773" observedRunningTime="2025-12-12 04:45:53.990477061 +0000 UTC m=+744.866494228" watchObservedRunningTime="2025-12-12 04:45:53.997851749 +0000 UTC m=+744.873868896" Dec 12 04:45:59 crc kubenswrapper[4796]: I1212 04:45:59.994847 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv"] Dec 12 04:45:59 crc kubenswrapper[4796]: I1212 04:45:59.996343 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.001474 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cmqr4" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.016209 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bd782"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.017030 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.024506 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.028258 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.040423 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bd782"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.045076 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmr2r\" (UniqueName: \"kubernetes.io/projected/c918c9c3-c2d3-415d-9942-16385200a014-kube-api-access-dmr2r\") pod \"nmstate-metrics-7f7f7578db-qrqkv\" (UID: \"c918c9c3-c2d3-415d-9942-16385200a014\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.045188 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.045219 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcss2\" (UniqueName: \"kubernetes.io/projected/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-kube-api-access-fcss2\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.050302 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lc8l9"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.051131 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.145943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8nhs\" (UniqueName: \"kubernetes.io/projected/284c3da0-54ab-47f6-960d-063c58c0f870-kube-api-access-k8nhs\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146222 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-ovs-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-nmstate-lock\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146485 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-dbus-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcss2\" (UniqueName: \"kubernetes.io/projected/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-kube-api-access-fcss2\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.146742 4796 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.146819 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair podName:3d70cee4-8e4a-49fe-a0c1-e26a7452ba32 nodeName:}" failed. No retries permitted until 2025-12-12 04:46:00.646800231 +0000 UTC m=+751.522817368 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair") pod "nmstate-webhook-f8fb84555-bd782" (UID: "3d70cee4-8e4a-49fe-a0c1-e26a7452ba32") : secret "openshift-nmstate-webhook" not found Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.146751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmr2r\" (UniqueName: \"kubernetes.io/projected/c918c9c3-c2d3-415d-9942-16385200a014-kube-api-access-dmr2r\") pod \"nmstate-metrics-7f7f7578db-qrqkv\" (UID: \"c918c9c3-c2d3-415d-9942-16385200a014\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.159735 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.160568 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.162901 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.163002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-9m85x" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.167562 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.173025 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmr2r\" (UniqueName: \"kubernetes.io/projected/c918c9c3-c2d3-415d-9942-16385200a014-kube-api-access-dmr2r\") pod \"nmstate-metrics-7f7f7578db-qrqkv\" (UID: \"c918c9c3-c2d3-415d-9942-16385200a014\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.183296 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.189851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcss2\" (UniqueName: \"kubernetes.io/projected/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-kube-api-access-fcss2\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1e50c4c-9467-4663-a305-6077b4dc5b1d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-ovs-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndbl\" (UniqueName: \"kubernetes.io/projected/f1e50c4c-9467-4663-a305-6077b4dc5b1d-kube-api-access-6ndbl\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-nmstate-lock\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.250995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-dbus-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.251066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8nhs\" (UniqueName: \"kubernetes.io/projected/284c3da0-54ab-47f6-960d-063c58c0f870-kube-api-access-k8nhs\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.251374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-nmstate-lock\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.251478 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-ovs-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.251803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/284c3da0-54ab-47f6-960d-063c58c0f870-dbus-socket\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.279028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8nhs\" (UniqueName: \"kubernetes.io/projected/284c3da0-54ab-47f6-960d-063c58c0f870-kube-api-access-k8nhs\") pod \"nmstate-handler-lc8l9\" (UID: \"284c3da0-54ab-47f6-960d-063c58c0f870\") " pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.312511 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.352029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1e50c4c-9467-4663-a305-6077b4dc5b1d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.352074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.352098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ndbl\" (UniqueName: \"kubernetes.io/projected/f1e50c4c-9467-4663-a305-6077b4dc5b1d-kube-api-access-6ndbl\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.353120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1e50c4c-9467-4663-a305-6077b4dc5b1d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.353185 4796 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.353218 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert podName:f1e50c4c-9467-4663-a305-6077b4dc5b1d nodeName:}" failed. No retries permitted until 2025-12-12 04:46:00.85320648 +0000 UTC m=+751.729223627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-jzd58" (UID: "f1e50c4c-9467-4663-a305-6077b4dc5b1d") : secret "plugin-serving-cert" not found Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.371595 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.379227 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ndbl\" (UniqueName: \"kubernetes.io/projected/f1e50c4c-9467-4663-a305-6077b4dc5b1d-kube-api-access-6ndbl\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: W1212 04:46:00.408864 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284c3da0_54ab_47f6_960d_063c58c0f870.slice/crio-365578ff0444a73b820167c6fc6d12a88ebeb87eb792964ace95fe7e60f8e41e WatchSource:0}: Error finding container 365578ff0444a73b820167c6fc6d12a88ebeb87eb792964ace95fe7e60f8e41e: Status 404 returned error can't find the container with id 365578ff0444a73b820167c6fc6d12a88ebeb87eb792964ace95fe7e60f8e41e Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.480406 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7699df-cpp58"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.481537 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.485363 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7699df-cpp58"] Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.564954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-oauth-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565322 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-service-ca\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9v7\" (UniqueName: \"kubernetes.io/projected/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-kube-api-access-2x9v7\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565529 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-oauth-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565635 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: 
\"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565712 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.565759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-trusted-ca-bundle\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.628633 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv"] Dec 12 04:46:00 crc kubenswrapper[4796]: W1212 04:46:00.642110 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc918c9c3_c2d3_415d_9942_16385200a014.slice/crio-9a021ea68f4301a0a21ab5bdabb04ee6e0460100103ca9a8aea16ae0866f210d WatchSource:0}: Error finding container 9a021ea68f4301a0a21ab5bdabb04ee6e0460100103ca9a8aea16ae0866f210d: Status 404 returned error can't find the container with id 9a021ea68f4301a0a21ab5bdabb04ee6e0460100103ca9a8aea16ae0866f210d Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666795 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-service-ca\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x9v7\" (UniqueName: \"kubernetes.io/projected/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-kube-api-access-2x9v7\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666874 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-oauth-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666899 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666935 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " 
pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666957 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-trusted-ca-bundle\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.666980 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.667004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-oauth-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.668160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-service-ca\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.668251 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.668569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-oauth-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.668730 4796 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 12 04:46:00 crc kubenswrapper[4796]: E1212 04:46:00.668792 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair podName:3d70cee4-8e4a-49fe-a0c1-e26a7452ba32 nodeName:}" failed. No retries permitted until 2025-12-12 04:46:01.668774655 +0000 UTC m=+752.544791882 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair") pod "nmstate-webhook-f8fb84555-bd782" (UID: "3d70cee4-8e4a-49fe-a0c1-e26a7452ba32") : secret "openshift-nmstate-webhook" not found Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.669061 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-trusted-ca-bundle\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.672224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-oauth-config\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.676021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-console-serving-cert\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.685065 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x9v7\" (UniqueName: \"kubernetes.io/projected/dc2a0930-b6f8-471d-b452-a930f0b2c2e2-kube-api-access-2x9v7\") pod \"console-f9d7699df-cpp58\" (UID: \"dc2a0930-b6f8-471d-b452-a930f0b2c2e2\") " pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.810673 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.869235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:00 crc kubenswrapper[4796]: I1212 04:46:00.872927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1e50c4c-9467-4663-a305-6077b4dc5b1d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-jzd58\" (UID: \"f1e50c4c-9467-4663-a305-6077b4dc5b1d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.012746 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7699df-cpp58"] Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.015886 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lc8l9" event={"ID":"284c3da0-54ab-47f6-960d-063c58c0f870","Type":"ContainerStarted","Data":"365578ff0444a73b820167c6fc6d12a88ebeb87eb792964ace95fe7e60f8e41e"} Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.017766 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" event={"ID":"c918c9c3-c2d3-415d-9942-16385200a014","Type":"ContainerStarted","Data":"9a021ea68f4301a0a21ab5bdabb04ee6e0460100103ca9a8aea16ae0866f210d"} Dec 12 04:46:01 crc kubenswrapper[4796]: W1212 04:46:01.023167 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc2a0930_b6f8_471d_b452_a930f0b2c2e2.slice/crio-c0afc097ecefe3489a4d753f59c0981d289cd4175f2e86882e9348df4a8b5c5f WatchSource:0}: Error finding container c0afc097ecefe3489a4d753f59c0981d289cd4175f2e86882e9348df4a8b5c5f: Status 404 returned error can't find the container with id c0afc097ecefe3489a4d753f59c0981d289cd4175f2e86882e9348df4a8b5c5f Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.074226 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.260293 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58"] Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.681239 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.686946 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3d70cee4-8e4a-49fe-a0c1-e26a7452ba32-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-bd782\" (UID: \"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:01 crc kubenswrapper[4796]: I1212 04:46:01.832459 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.028300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7699df-cpp58" event={"ID":"dc2a0930-b6f8-471d-b452-a930f0b2c2e2","Type":"ContainerStarted","Data":"92cb0bf08640647dbdac92767afd0853a5f5eb0240e99f433b56683d251755d0"} Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.028683 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7699df-cpp58" event={"ID":"dc2a0930-b6f8-471d-b452-a930f0b2c2e2","Type":"ContainerStarted","Data":"c0afc097ecefe3489a4d753f59c0981d289cd4175f2e86882e9348df4a8b5c5f"} Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.029532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-bd782"] Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.032669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" event={"ID":"f1e50c4c-9467-4663-a305-6077b4dc5b1d","Type":"ContainerStarted","Data":"921d19aff70f76ac32e54fd37e0b0f412c75802a1e2d2749ec4650b2e96362d3"} Dec 12 04:46:02 crc kubenswrapper[4796]: W1212 04:46:02.035097 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d70cee4_8e4a_49fe_a0c1_e26a7452ba32.slice/crio-a6b0edd35734f088f483c76ecbc779bef7b6e3a9bd13ce09ec42f7b45b484cc0 WatchSource:0}: Error finding container a6b0edd35734f088f483c76ecbc779bef7b6e3a9bd13ce09ec42f7b45b484cc0: Status 404 returned error can't find the container with id a6b0edd35734f088f483c76ecbc779bef7b6e3a9bd13ce09ec42f7b45b484cc0 Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.053498 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7699df-cpp58" podStartSLOduration=2.053472117 podStartE2EDuration="2.053472117s" podCreationTimestamp="2025-12-12 04:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:46:02.047413232 +0000 UTC m=+752.923430389" watchObservedRunningTime="2025-12-12 04:46:02.053472117 +0000 UTC m=+752.929489264" Dec 12 04:46:02 crc kubenswrapper[4796]: I1212 04:46:02.631058 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 04:46:03 crc kubenswrapper[4796]: I1212 04:46:03.040403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" event={"ID":"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32","Type":"ContainerStarted","Data":"a6b0edd35734f088f483c76ecbc779bef7b6e3a9bd13ce09ec42f7b45b484cc0"} Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 04:46:04.052266 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" event={"ID":"c918c9c3-c2d3-415d-9942-16385200a014","Type":"ContainerStarted","Data":"961399eb7820d95219d385cd3a3de3f0121a5764bf36550fb688f9fb274ce6cf"} Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 04:46:04.054415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lc8l9" event={"ID":"284c3da0-54ab-47f6-960d-063c58c0f870","Type":"ContainerStarted","Data":"47b33b6f0e59b8a84675e16bd720d5d48720d09b8cd4e8939b5ffbd347caf653"} Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 
04:46:04.058482 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" event={"ID":"3d70cee4-8e4a-49fe-a0c1-e26a7452ba32","Type":"ContainerStarted","Data":"320f3c70fce19ed35bfed703e9ca7b4135cfda89878902df1ce47f8f1923040d"} Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 04:46:04.059186 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 04:46:04.070602 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lc8l9" podStartSLOduration=1.509418223 podStartE2EDuration="4.07058588s" podCreationTimestamp="2025-12-12 04:46:00 +0000 UTC" firstStartedPulling="2025-12-12 04:46:00.450874616 +0000 UTC m=+751.326891763" lastFinishedPulling="2025-12-12 04:46:03.012042273 +0000 UTC m=+753.888059420" observedRunningTime="2025-12-12 04:46:04.067906894 +0000 UTC m=+754.943924041" watchObservedRunningTime="2025-12-12 04:46:04.07058588 +0000 UTC m=+754.946603027" Dec 12 04:46:04 crc kubenswrapper[4796]: I1212 04:46:04.085488 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" podStartSLOduration=4.107970764 podStartE2EDuration="5.08547366s" podCreationTimestamp="2025-12-12 04:45:59 +0000 UTC" firstStartedPulling="2025-12-12 04:46:02.037641677 +0000 UTC m=+752.913658824" lastFinishedPulling="2025-12-12 04:46:03.015144573 +0000 UTC m=+753.891161720" observedRunningTime="2025-12-12 04:46:04.080272362 +0000 UTC m=+754.956289509" watchObservedRunningTime="2025-12-12 04:46:04.08547366 +0000 UTC m=+754.961490797" Dec 12 04:46:05 crc kubenswrapper[4796]: I1212 04:46:05.067383 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" event={"ID":"f1e50c4c-9467-4663-a305-6077b4dc5b1d","Type":"ContainerStarted","Data":"c7f7e159cd57e42272344523af90973941629af142b4a66807d9062c981d80a0"} Dec 12 04:46:05 crc kubenswrapper[4796]: I1212 04:46:05.067783 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:05 crc kubenswrapper[4796]: I1212 04:46:05.096127 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-jzd58" podStartSLOduration=2.215988633 podStartE2EDuration="5.096103233s" podCreationTimestamp="2025-12-12 04:46:00 +0000 UTC" firstStartedPulling="2025-12-12 04:46:01.284799858 +0000 UTC m=+752.160817005" lastFinishedPulling="2025-12-12 04:46:04.164914458 +0000 UTC m=+755.040931605" observedRunningTime="2025-12-12 04:46:05.085993007 +0000 UTC m=+755.962010154" watchObservedRunningTime="2025-12-12 04:46:05.096103233 +0000 UTC m=+755.972120410" Dec 12 04:46:06 crc kubenswrapper[4796]: I1212 04:46:06.072742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" event={"ID":"c918c9c3-c2d3-415d-9942-16385200a014","Type":"ContainerStarted","Data":"9ee288d3d7d2ae5ad29b43f5350b6d9a5b4cb27784a19cd6e17d35bfc9ea5e9d"} Dec 12 04:46:06 crc kubenswrapper[4796]: I1212 04:46:06.093305 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-qrqkv" podStartSLOduration=2.403320564 podStartE2EDuration="7.093288872s" podCreationTimestamp="2025-12-12 04:45:59 +0000 UTC" firstStartedPulling="2025-12-12 
04:46:00.643889723 +0000 UTC m=+751.519906870" lastFinishedPulling="2025-12-12 04:46:05.333858041 +0000 UTC m=+756.209875178" observedRunningTime="2025-12-12 04:46:06.091422313 +0000 UTC m=+756.967439460" watchObservedRunningTime="2025-12-12 04:46:06.093288872 +0000 UTC m=+756.969306019" Dec 12 04:46:10 crc kubenswrapper[4796]: I1212 04:46:10.406468 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lc8l9" Dec 12 04:46:10 crc kubenswrapper[4796]: I1212 04:46:10.810802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:10 crc kubenswrapper[4796]: I1212 04:46:10.811263 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:10 crc kubenswrapper[4796]: I1212 04:46:10.816981 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:11 crc kubenswrapper[4796]: I1212 04:46:11.111444 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7699df-cpp58" Dec 12 04:46:11 crc kubenswrapper[4796]: I1212 04:46:11.162205 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:46:21 crc kubenswrapper[4796]: I1212 04:46:21.842002 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-bd782" Dec 12 04:46:32 crc kubenswrapper[4796]: I1212 04:46:32.969857 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:46:32 crc kubenswrapper[4796]: I1212 04:46:32.970680 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.464427 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf"] Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.465922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.468311 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.477594 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf"] Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.530325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcbn\" (UniqueName: \"kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.530400 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.530441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.631607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcbn\" (UniqueName: \"kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.631670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.631713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.632072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.632383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.655149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcbn\" (UniqueName: \"kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:33 crc kubenswrapper[4796]: I1212 04:46:33.782561 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:34 crc kubenswrapper[4796]: I1212 04:46:34.176110 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf"] Dec 12 04:46:34 crc kubenswrapper[4796]: I1212 04:46:34.395626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerStarted","Data":"c7978f3a344e2c537db38001a81db0e7effd496a630f406bec9bdff6df1538aa"} Dec 12 04:46:34 crc kubenswrapper[4796]: I1212 04:46:34.395672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerStarted","Data":"8550c0988ea1232f69ae0225dfd8d2163b874d34815d2e72b8eb08a98d057a51"} Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.401886 4796 generic.go:334] "Generic (PLEG): container finished" podID="52b4af6b-5192-4766-8b22-d099bb744669" containerID="c7978f3a344e2c537db38001a81db0e7effd496a630f406bec9bdff6df1538aa" exitCode=0 Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.401989 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerDied","Data":"c7978f3a344e2c537db38001a81db0e7effd496a630f406bec9bdff6df1538aa"} Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.585350 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.586758 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.592534 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.656667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.656781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.656837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6r48\" (UniqueName: \"kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.758290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6r48\" (UniqueName: \"kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.758364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.758424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.758838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.758905 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.782849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j6r48\" (UniqueName: \"kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48\") pod \"redhat-operators-nbx9l\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:35 crc kubenswrapper[4796]: I1212 04:46:35.919469 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.212383 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4tvxf" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerName="console" containerID="cri-o://637c27af1ad4d922b7cd59a6b03c79d323a4820735d835602701326e3839a8a5" gracePeriod=15 Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.311827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:36 crc kubenswrapper[4796]: W1212 04:46:36.319519 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod290aeef4_2d46_418e_8752_858942ad09dd.slice/crio-e2b9daa6d018ac4e948f977efd1dddb9e5ae4843d01a5393d4d5b2f8a1bf53a1 WatchSource:0}: Error finding container e2b9daa6d018ac4e948f977efd1dddb9e5ae4843d01a5393d4d5b2f8a1bf53a1: Status 404 returned error can't find the container with id e2b9daa6d018ac4e948f977efd1dddb9e5ae4843d01a5393d4d5b2f8a1bf53a1 Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.408554 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerStarted","Data":"e2b9daa6d018ac4e948f977efd1dddb9e5ae4843d01a5393d4d5b2f8a1bf53a1"} Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.410452 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4tvxf_00bcefc1-0041-4c8e-836f-f1abaa3eb344/console/0.log" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.410494 4796 generic.go:334] "Generic (PLEG): container finished" podID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerID="637c27af1ad4d922b7cd59a6b03c79d323a4820735d835602701326e3839a8a5" exitCode=2 Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.410523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4tvxf" event={"ID":"00bcefc1-0041-4c8e-836f-f1abaa3eb344","Type":"ContainerDied","Data":"637c27af1ad4d922b7cd59a6b03c79d323a4820735d835602701326e3839a8a5"} Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.552933 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4tvxf_00bcefc1-0041-4c8e-836f-f1abaa3eb344/console/0.log" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.552992 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573658 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grkjj\" (UniqueName: \"kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573758 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.573824 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config\") pod \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\" (UID: \"00bcefc1-0041-4c8e-836f-f1abaa3eb344\") " Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.575756 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.575795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config" (OuterVolumeSpecName: "console-config") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.575861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.576114 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca" (OuterVolumeSpecName: "service-ca") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.614912 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj" (OuterVolumeSpecName: "kube-api-access-grkjj") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "kube-api-access-grkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.615652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.617130 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "00bcefc1-0041-4c8e-836f-f1abaa3eb344" (UID: "00bcefc1-0041-4c8e-836f-f1abaa3eb344"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.674716 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.674997 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.675077 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.675130 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grkjj\" (UniqueName: \"kubernetes.io/projected/00bcefc1-0041-4c8e-836f-f1abaa3eb344-kube-api-access-grkjj\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.675224 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.675308 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:36 crc kubenswrapper[4796]: I1212 04:46:36.675389 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00bcefc1-0041-4c8e-836f-f1abaa3eb344-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.418898 4796 generic.go:334] "Generic (PLEG): container finished" podID="52b4af6b-5192-4766-8b22-d099bb744669" containerID="390878d72b17f7dd25cbf3caca4392b676b1dd062dc8812c2e7be3bbea03a971" exitCode=0 Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.424192 4796 generic.go:334] "Generic (PLEG): container finished" podID="290aeef4-2d46-418e-8752-858942ad09dd" containerID="bb005fef34ac248813fbdf922f7a4f1f35573028ee7525c204c388e19d358ed9" exitCode=0 Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.424999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerDied","Data":"390878d72b17f7dd25cbf3caca4392b676b1dd062dc8812c2e7be3bbea03a971"} Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.425036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerDied","Data":"bb005fef34ac248813fbdf922f7a4f1f35573028ee7525c204c388e19d358ed9"} Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.431209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4tvxf_00bcefc1-0041-4c8e-836f-f1abaa3eb344/console/0.log" Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.431327 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4tvxf" 
event={"ID":"00bcefc1-0041-4c8e-836f-f1abaa3eb344","Type":"ContainerDied","Data":"8f5943f487008d94a4910b50583e1e8dc77ad82687a448584ee0cc8adfcc8b5b"} Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.431410 4796 scope.go:117] "RemoveContainer" containerID="637c27af1ad4d922b7cd59a6b03c79d323a4820735d835602701326e3839a8a5" Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.431437 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4tvxf" Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.509648 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:46:37 crc kubenswrapper[4796]: I1212 04:46:37.517796 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4tvxf"] Dec 12 04:46:38 crc kubenswrapper[4796]: I1212 04:46:38.445637 4796 generic.go:334] "Generic (PLEG): container finished" podID="52b4af6b-5192-4766-8b22-d099bb744669" containerID="d6b2c4d2709f410d5c37ce209d4ce3bb586f812e12a7dbd1682ab07b1d47943e" exitCode=0 Dec 12 04:46:38 crc kubenswrapper[4796]: I1212 04:46:38.445727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerDied","Data":"d6b2c4d2709f410d5c37ce209d4ce3bb586f812e12a7dbd1682ab07b1d47943e"} Dec 12 04:46:38 crc kubenswrapper[4796]: I1212 04:46:38.448705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerStarted","Data":"ca0c5bf04df6fb01db0a9821e9ed189790babf3cb56fc9d18cefb973683b67fe"} Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.421457 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" path="/var/lib/kubelet/pods/00bcefc1-0041-4c8e-836f-f1abaa3eb344/volumes" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.807638 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.832292 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util\") pod \"52b4af6b-5192-4766-8b22-d099bb744669\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.832372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle\") pod \"52b4af6b-5192-4766-8b22-d099bb744669\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.832533 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgcbn\" (UniqueName: \"kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn\") pod \"52b4af6b-5192-4766-8b22-d099bb744669\" (UID: \"52b4af6b-5192-4766-8b22-d099bb744669\") " Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.836358 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle" (OuterVolumeSpecName: "bundle") pod "52b4af6b-5192-4766-8b22-d099bb744669" (UID: "52b4af6b-5192-4766-8b22-d099bb744669"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.852538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn" (OuterVolumeSpecName: "kube-api-access-hgcbn") pod "52b4af6b-5192-4766-8b22-d099bb744669" (UID: "52b4af6b-5192-4766-8b22-d099bb744669"). InnerVolumeSpecName "kube-api-access-hgcbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.856667 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util" (OuterVolumeSpecName: "util") pod "52b4af6b-5192-4766-8b22-d099bb744669" (UID: "52b4af6b-5192-4766-8b22-d099bb744669"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.933880 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-util\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.933911 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b4af6b-5192-4766-8b22-d099bb744669-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:39 crc kubenswrapper[4796]: I1212 04:46:39.933920 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgcbn\" (UniqueName: \"kubernetes.io/projected/52b4af6b-5192-4766-8b22-d099bb744669-kube-api-access-hgcbn\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:40 crc kubenswrapper[4796]: I1212 04:46:40.463198 4796 generic.go:334] "Generic (PLEG): container finished" podID="290aeef4-2d46-418e-8752-858942ad09dd" containerID="ca0c5bf04df6fb01db0a9821e9ed189790babf3cb56fc9d18cefb973683b67fe" exitCode=0 Dec 12 04:46:40 crc kubenswrapper[4796]: I1212 04:46:40.463315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerDied","Data":"ca0c5bf04df6fb01db0a9821e9ed189790babf3cb56fc9d18cefb973683b67fe"} Dec 12 04:46:40 crc kubenswrapper[4796]: I1212 04:46:40.470473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" event={"ID":"52b4af6b-5192-4766-8b22-d099bb744669","Type":"ContainerDied","Data":"8550c0988ea1232f69ae0225dfd8d2163b874d34815d2e72b8eb08a98d057a51"} Dec 12 04:46:40 crc kubenswrapper[4796]: I1212 04:46:40.470506 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8550c0988ea1232f69ae0225dfd8d2163b874d34815d2e72b8eb08a98d057a51" Dec 12 04:46:40 crc kubenswrapper[4796]: I1212 04:46:40.470568 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf" Dec 12 04:46:41 crc kubenswrapper[4796]: I1212 04:46:41.478703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerStarted","Data":"8b2b5e440f8f8084300b0132edeebfb7af6d1f4be96e967b07f084ec1d2a1249"} Dec 12 04:46:45 crc kubenswrapper[4796]: I1212 04:46:45.920244 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:45 crc kubenswrapper[4796]: I1212 04:46:45.920853 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:46 crc kubenswrapper[4796]: I1212 04:46:46.971930 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nbx9l" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="registry-server" probeResult="failure" output=< Dec 12 04:46:46 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 04:46:46 crc kubenswrapper[4796]: > Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.683368 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nbx9l" podStartSLOduration=12.145400327 podStartE2EDuration="15.68334979s" podCreationTimestamp="2025-12-12 04:46:35 +0000 UTC" firstStartedPulling="2025-12-12 04:46:37.450203874 +0000 UTC m=+788.326221031" lastFinishedPulling="2025-12-12 04:46:40.988153307 +0000 UTC m=+791.864170494" observedRunningTime="2025-12-12 04:46:41.498434326 +0000 UTC m=+792.374451493" watchObservedRunningTime="2025-12-12 04:46:50.68334979 +0000 UTC m=+801.559366937" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685099 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk"] Dec 12 04:46:50 crc kubenswrapper[4796]: E1212 04:46:50.685338 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerName="console" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685351 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerName="console" Dec 12 04:46:50 crc kubenswrapper[4796]: E1212 04:46:50.685363 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="util" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685370 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="util" Dec 12 04:46:50 crc kubenswrapper[4796]: E1212 04:46:50.685380 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="extract" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685386 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="extract" Dec 12 04:46:50 crc kubenswrapper[4796]: E1212 04:46:50.685396 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="pull" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685402 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b4af6b-5192-4766-8b22-d099bb744669" 
containerName="pull" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685499 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bcefc1-0041-4c8e-836f-f1abaa3eb344" containerName="console" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685510 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b4af6b-5192-4766-8b22-d099bb744669" containerName="extract" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.685893 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.708925 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.716411 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.716527 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.716848 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.721761 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k4llr" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.755465 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk"] Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.756223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzrb\" (UniqueName: \"kubernetes.io/projected/db1474b8-5eda-4d9e-8364-21082cc5d214-kube-api-access-gtzrb\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.756304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-webhook-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.756336 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-apiservice-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.857012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-webhook-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 
04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.857051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-apiservice-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.857120 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtzrb\" (UniqueName: \"kubernetes.io/projected/db1474b8-5eda-4d9e-8364-21082cc5d214-kube-api-access-gtzrb\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.864138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-apiservice-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.880424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db1474b8-5eda-4d9e-8364-21082cc5d214-webhook-cert\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:50 crc kubenswrapper[4796]: I1212 04:46:50.888179 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtzrb\" (UniqueName: \"kubernetes.io/projected/db1474b8-5eda-4d9e-8364-21082cc5d214-kube-api-access-gtzrb\") pod \"metallb-operator-controller-manager-54b76c8dd-989lk\" (UID: \"db1474b8-5eda-4d9e-8364-21082cc5d214\") " pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.003907 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.014871 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv"] Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.015816 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.020987 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hkpkv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.026827 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.031035 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.037905 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv"] Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.061023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-webhook-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.061514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt72\" (UniqueName: \"kubernetes.io/projected/9d65581e-d568-49dc-9be0-4e4f06ce52e4-kube-api-access-zjt72\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.061601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-apiservice-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.165407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt72\" (UniqueName: \"kubernetes.io/projected/9d65581e-d568-49dc-9be0-4e4f06ce52e4-kube-api-access-zjt72\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.165822 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-apiservice-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.165968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-webhook-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.192843 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt72\" (UniqueName: \"kubernetes.io/projected/9d65581e-d568-49dc-9be0-4e4f06ce52e4-kube-api-access-zjt72\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.258195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-webhook-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.259098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d65581e-d568-49dc-9be0-4e4f06ce52e4-apiservice-cert\") pod \"metallb-operator-webhook-server-c7ffbcf95-nsmtv\" (UID: \"9d65581e-d568-49dc-9be0-4e4f06ce52e4\") " pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.338561 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.614461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk"] Dec 12 04:46:51 crc kubenswrapper[4796]: I1212 04:46:51.801893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv"] Dec 12 04:46:52 crc kubenswrapper[4796]: I1212 04:46:52.531723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" event={"ID":"db1474b8-5eda-4d9e-8364-21082cc5d214","Type":"ContainerStarted","Data":"103cce38660441af4abd683446923ed0df08e38f5d1def760d967f40374cf24d"} Dec 12 04:46:52 crc kubenswrapper[4796]: I1212 04:46:52.533050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" event={"ID":"9d65581e-d568-49dc-9be0-4e4f06ce52e4","Type":"ContainerStarted","Data":"90806a6886c383422958b877d10897a536ef4b0b73d2be65a57b54e46160b247"} Dec 12 04:46:56 crc kubenswrapper[4796]: I1212 04:46:56.002983 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:56 crc kubenswrapper[4796]: I1212 04:46:56.078473 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:57 crc kubenswrapper[4796]: I1212 04:46:57.972871 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:57 crc kubenswrapper[4796]: I1212 04:46:57.973094 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nbx9l" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="registry-server" containerID="cri-o://8b2b5e440f8f8084300b0132edeebfb7af6d1f4be96e967b07f084ec1d2a1249" gracePeriod=2 Dec 12 04:46:58 crc kubenswrapper[4796]: I1212 04:46:58.595974 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="290aeef4-2d46-418e-8752-858942ad09dd" containerID="8b2b5e440f8f8084300b0132edeebfb7af6d1f4be96e967b07f084ec1d2a1249" exitCode=0 Dec 12 04:46:58 crc kubenswrapper[4796]: I1212 04:46:58.596204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerDied","Data":"8b2b5e440f8f8084300b0132edeebfb7af6d1f4be96e967b07f084ec1d2a1249"} Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.201798 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.246577 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6r48\" (UniqueName: \"kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48\") pod \"290aeef4-2d46-418e-8752-858942ad09dd\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.246697 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content\") pod \"290aeef4-2d46-418e-8752-858942ad09dd\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.246750 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities\") pod \"290aeef4-2d46-418e-8752-858942ad09dd\" (UID: \"290aeef4-2d46-418e-8752-858942ad09dd\") " Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.253892 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities" (OuterVolumeSpecName: "utilities") pod "290aeef4-2d46-418e-8752-858942ad09dd" (UID: "290aeef4-2d46-418e-8752-858942ad09dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.266869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48" (OuterVolumeSpecName: "kube-api-access-j6r48") pod "290aeef4-2d46-418e-8752-858942ad09dd" (UID: "290aeef4-2d46-418e-8752-858942ad09dd"). InnerVolumeSpecName "kube-api-access-j6r48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.352836 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.353053 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6r48\" (UniqueName: \"kubernetes.io/projected/290aeef4-2d46-418e-8752-858942ad09dd-kube-api-access-j6r48\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.420959 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "290aeef4-2d46-418e-8752-858942ad09dd" (UID: "290aeef4-2d46-418e-8752-858942ad09dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.454100 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290aeef4-2d46-418e-8752-858942ad09dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.606596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbx9l" event={"ID":"290aeef4-2d46-418e-8752-858942ad09dd","Type":"ContainerDied","Data":"e2b9daa6d018ac4e948f977efd1dddb9e5ae4843d01a5393d4d5b2f8a1bf53a1"} Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.606881 4796 scope.go:117] "RemoveContainer" containerID="8b2b5e440f8f8084300b0132edeebfb7af6d1f4be96e967b07f084ec1d2a1249" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.606760 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbx9l" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.615132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" event={"ID":"db1474b8-5eda-4d9e-8364-21082cc5d214","Type":"ContainerStarted","Data":"e802acfbcf61583eec1b152900e76b0a83a2f7956f5a1123f179115bfbb3bd8d"} Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.615772 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.622067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" event={"ID":"9d65581e-d568-49dc-9be0-4e4f06ce52e4","Type":"ContainerStarted","Data":"1e7c7a51d844abc312085ef6e3e90b127f405e8571980e32eb5237b0e3b6263a"} Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.622727 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.631945 4796 scope.go:117] "RemoveContainer" containerID="ca0c5bf04df6fb01db0a9821e9ed189790babf3cb56fc9d18cefb973683b67fe" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.634638 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.654780 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nbx9l"] Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.663340 4796 scope.go:117] "RemoveContainer" containerID="bb005fef34ac248813fbdf922f7a4f1f35573028ee7525c204c388e19d358ed9" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.708861 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" podStartSLOduration=2.378590692 podStartE2EDuration="9.70883626s" podCreationTimestamp="2025-12-12 04:46:50 +0000 UTC" firstStartedPulling="2025-12-12 04:46:51.629397903 +0000 UTC m=+802.505415050" lastFinishedPulling="2025-12-12 04:46:58.959643471 +0000 UTC m=+809.835660618" observedRunningTime="2025-12-12 04:46:59.704137099 +0000 UTC m=+810.580154246" watchObservedRunningTime="2025-12-12 04:46:59.70883626 +0000 UTC m=+810.584853407" Dec 12 04:46:59 crc kubenswrapper[4796]: I1212 04:46:59.709397 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" podStartSLOduration=2.543550467 podStartE2EDuration="9.709389888s" podCreationTimestamp="2025-12-12 04:46:50 +0000 UTC" firstStartedPulling="2025-12-12 04:46:51.811464589 +0000 UTC m=+802.687481736" lastFinishedPulling="2025-12-12 04:46:58.97730401 +0000 UTC m=+809.853321157" observedRunningTime="2025-12-12 04:46:59.676892129 +0000 UTC m=+810.552909286" watchObservedRunningTime="2025-12-12 04:46:59.709389888 +0000 UTC m=+810.585407045" Dec 12 04:47:01 crc kubenswrapper[4796]: I1212 04:47:01.418386 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290aeef4-2d46-418e-8752-858942ad09dd" path="/var/lib/kubelet/pods/290aeef4-2d46-418e-8752-858942ad09dd/volumes" Dec 12 04:47:02 crc kubenswrapper[4796]: I1212 04:47:02.969864 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:47:02 crc kubenswrapper[4796]: I1212 04:47:02.970426 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:47:11 crc kubenswrapper[4796]: I1212 04:47:11.347750 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c7ffbcf95-nsmtv" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.007832 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-54b76c8dd-989lk" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.764237 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2"] Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.764563 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="extract-utilities" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.764584 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="extract-utilities" Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.764600 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="extract-content" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.764609 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="extract-content" Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.764629 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="registry-server" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.764638 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="290aeef4-2d46-418e-8752-858942ad09dd" containerName="registry-server" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.764764 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="290aeef4-2d46-418e-8752-858942ad09dd" 
containerName="registry-server" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.765310 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.767591 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-s6n9k"] Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.768569 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n5t2c" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.769692 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.778805 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.783937 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.784135 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.802412 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2"] Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-conf\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845593 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khbn\" (UniqueName: \"kubernetes.io/projected/a0877598-c08d-4323-9919-775d9d1f789d-kube-api-access-7khbn\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845617 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-sockets\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-reloader\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845657 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfpp\" (UniqueName: \"kubernetes.io/projected/7585dd28-35f6-4a54-b39c-9bdbecf98c13-kube-api-access-jmfpp\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845674 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0877598-c08d-4323-9919-775d9d1f789d-frr-startup\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845704 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.845725 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-metrics\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.868637 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nvw2w"] Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.874495 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nvw2w" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.876659 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.876952 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.877244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.877392 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vqk7b" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.884817 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-vm69t"] Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.885687 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.887164 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.913168 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-vm69t"] Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khbn\" (UniqueName: \"kubernetes.io/projected/a0877598-c08d-4323-9919-775d9d1f789d-kube-api-access-7khbn\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946642 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-sockets\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-cert\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-reloader\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946717 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfpp\" (UniqueName: \"kubernetes.io/projected/7585dd28-35f6-4a54-b39c-9bdbecf98c13-kube-api-access-jmfpp\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946915 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0877598-c08d-4323-9919-775d9d1f789d-frr-startup\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.946983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947050 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rll8w\" (UniqueName: \"kubernetes.io/projected/dfbd0368-d699-416a-bf10-c6e5a6716c1a-kube-api-access-rll8w\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947087 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379b9266-5fd0-4c73-8d8e-376e85112dbd-metallb-excludel2\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.947096 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-metrics\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.947137 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.947152 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert podName:7585dd28-35f6-4a54-b39c-9bdbecf98c13 nodeName:}" failed. No retries permitted until 2025-12-12 04:47:32.44713104 +0000 UTC m=+843.323148187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert") pod "frr-k8s-webhook-server-7784b6fcf-fpns2" (UID: "7585dd28-35f6-4a54-b39c-9bdbecf98c13") : secret "frr-k8s-webhook-server-cert" not found Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:31 crc kubenswrapper[4796]: E1212 04:47:31.947259 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs podName:a0877598-c08d-4323-9919-775d9d1f789d nodeName:}" failed. No retries permitted until 2025-12-12 04:47:32.447239803 +0000 UTC m=+843.323256950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs") pod "frr-k8s-s6n9k" (UID: "a0877598-c08d-4323-9919-775d9d1f789d") : secret "frr-k8s-certs-secret" not found Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-metrics-certs\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-sockets\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-conf\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947433 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-metrics\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xrx\" (UniqueName: \"kubernetes.io/projected/379b9266-5fd0-4c73-8d8e-376e85112dbd-kube-api-access-74xrx\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-frr-conf\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947681 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0877598-c08d-4323-9919-775d9d1f789d-reloader\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.947928 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0877598-c08d-4323-9919-775d9d1f789d-frr-startup\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.965446 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khbn\" (UniqueName: \"kubernetes.io/projected/a0877598-c08d-4323-9919-775d9d1f789d-kube-api-access-7khbn\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:31 crc kubenswrapper[4796]: I1212 04:47:31.978858 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfpp\" (UniqueName: \"kubernetes.io/projected/7585dd28-35f6-4a54-b39c-9bdbecf98c13-kube-api-access-jmfpp\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-cert\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rll8w\" (UniqueName: \"kubernetes.io/projected/dfbd0368-d699-416a-bf10-c6e5a6716c1a-kube-api-access-rll8w\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379b9266-5fd0-4c73-8d8e-376e85112dbd-metallb-excludel2\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049257 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-metrics-certs\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.049313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74xrx\" (UniqueName: \"kubernetes.io/projected/379b9266-5fd0-4c73-8d8e-376e85112dbd-kube-api-access-74xrx\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: E1212 04:47:32.049448 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 12 04:47:32 crc 
kubenswrapper[4796]: E1212 04:47:32.049506 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist podName:379b9266-5fd0-4c73-8d8e-376e85112dbd nodeName:}" failed. No retries permitted until 2025-12-12 04:47:32.549487233 +0000 UTC m=+843.425504430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist") pod "speaker-nvw2w" (UID: "379b9266-5fd0-4c73-8d8e-376e85112dbd") : secret "metallb-memberlist" not found Dec 12 04:47:32 crc kubenswrapper[4796]: E1212 04:47:32.049628 4796 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 12 04:47:32 crc kubenswrapper[4796]: E1212 04:47:32.049654 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs podName:dfbd0368-d699-416a-bf10-c6e5a6716c1a nodeName:}" failed. No retries permitted until 2025-12-12 04:47:32.549645998 +0000 UTC m=+843.425663255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs") pod "controller-5bddd4b946-vm69t" (UID: "dfbd0368-d699-416a-bf10-c6e5a6716c1a") : secret "controller-certs-secret" not found Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.050167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/379b9266-5fd0-4c73-8d8e-376e85112dbd-metallb-excludel2\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.056502 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.056709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-metrics-certs\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.062108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-cert\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.068033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74xrx\" (UniqueName: \"kubernetes.io/projected/379b9266-5fd0-4c73-8d8e-376e85112dbd-kube-api-access-74xrx\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.069394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rll8w\" (UniqueName: \"kubernetes.io/projected/dfbd0368-d699-416a-bf10-c6e5a6716c1a-kube-api-access-rll8w\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.476892 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.476935 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.480248 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7585dd28-35f6-4a54-b39c-9bdbecf98c13-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-fpns2\" (UID: \"7585dd28-35f6-4a54-b39c-9bdbecf98c13\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.499108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0877598-c08d-4323-9919-775d9d1f789d-metrics-certs\") pod \"frr-k8s-s6n9k\" (UID: \"a0877598-c08d-4323-9919-775d9d1f789d\") " pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.578781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.578900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: E1212 04:47:32.578919 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 12 04:47:32 crc kubenswrapper[4796]: E1212 04:47:32.578988 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist podName:379b9266-5fd0-4c73-8d8e-376e85112dbd nodeName:}" failed. No retries permitted until 2025-12-12 04:47:33.578971212 +0000 UTC m=+844.454988349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist") pod "speaker-nvw2w" (UID: "379b9266-5fd0-4c73-8d8e-376e85112dbd") : secret "metallb-memberlist" not found Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.582522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfbd0368-d699-416a-bf10-c6e5a6716c1a-metrics-certs\") pod \"controller-5bddd4b946-vm69t\" (UID: \"dfbd0368-d699-416a-bf10-c6e5a6716c1a\") " pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.701195 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.715185 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.801341 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.977659 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.977904 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.977951 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.978443 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:47:32 crc kubenswrapper[4796]: I1212 04:47:32.978487 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d" gracePeriod=600 Dec 12 04:47:33 crc kubenswrapper[4796]: W1212 04:47:33.033775 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7585dd28_35f6_4a54_b39c_9bdbecf98c13.slice/crio-b1d380ef868388b723c26ddb37f03b6a5972c627560654cab6a59c7f421649f3 WatchSource:0}: Error finding container b1d380ef868388b723c26ddb37f03b6a5972c627560654cab6a59c7f421649f3: Status 404 returned error can't find the container with id b1d380ef868388b723c26ddb37f03b6a5972c627560654cab6a59c7f421649f3 Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.034931 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2"] Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.099797 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-vm69t"] Dec 12 04:47:33 crc kubenswrapper[4796]: W1212 04:47:33.107840 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbd0368_d699_416a_bf10_c6e5a6716c1a.slice/crio-96e9ffedd81c1bb91a2eea394f6f5f3ae3845a5d624818734a831df53d5fa35c WatchSource:0}: Error finding container 96e9ffedd81c1bb91a2eea394f6f5f3ae3845a5d624818734a831df53d5fa35c: Status 404 returned 
error can't find the container with id 96e9ffedd81c1bb91a2eea394f6f5f3ae3845a5d624818734a831df53d5fa35c Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.599562 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.607736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/379b9266-5fd0-4c73-8d8e-376e85112dbd-memberlist\") pod \"speaker-nvw2w\" (UID: \"379b9266-5fd0-4c73-8d8e-376e85112dbd\") " pod="metallb-system/speaker-nvw2w" Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.690509 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nvw2w" Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.827185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"d63af6af8c918d75d480b90d7faf722d201c37fcc87269b0ab1643e39431d6fd"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.839983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vm69t" event={"ID":"dfbd0368-d699-416a-bf10-c6e5a6716c1a","Type":"ContainerStarted","Data":"489a72491c22e20f9dfecb919939a828a11a684a14b4802d96ef0252f6bc0607"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.840035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vm69t" event={"ID":"dfbd0368-d699-416a-bf10-c6e5a6716c1a","Type":"ContainerStarted","Data":"867fe9b1a51efb2e8eb978890b878b4b6f756a6d3bdfeb283e5da78ec2da616f"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.840054 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.840065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-vm69t" event={"ID":"dfbd0368-d699-416a-bf10-c6e5a6716c1a","Type":"ContainerStarted","Data":"96e9ffedd81c1bb91a2eea394f6f5f3ae3845a5d624818734a831df53d5fa35c"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.854294 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d" exitCode=0 Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.854372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.854398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.854414 4796 scope.go:117] "RemoveContainer" containerID="06fce13dec8dc4d862dfbf1daa8b85318efca122c15dc08cd96ce9e70aee14aa" Dec 12 04:47:33 crc 
kubenswrapper[4796]: I1212 04:47:33.858423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nvw2w" event={"ID":"379b9266-5fd0-4c73-8d8e-376e85112dbd","Type":"ContainerStarted","Data":"8a7cd221453e93c404b24ba2d536d56b35a1b74f713a8519b9f635e3f2bccd7a"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.864589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" event={"ID":"7585dd28-35f6-4a54-b39c-9bdbecf98c13","Type":"ContainerStarted","Data":"b1d380ef868388b723c26ddb37f03b6a5972c627560654cab6a59c7f421649f3"} Dec 12 04:47:33 crc kubenswrapper[4796]: I1212 04:47:33.874181 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-vm69t" podStartSLOduration=2.874150696 podStartE2EDuration="2.874150696s" podCreationTimestamp="2025-12-12 04:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:47:33.871868434 +0000 UTC m=+844.747885581" watchObservedRunningTime="2025-12-12 04:47:33.874150696 +0000 UTC m=+844.750167833" Dec 12 04:47:34 crc kubenswrapper[4796]: I1212 04:47:34.883107 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nvw2w" event={"ID":"379b9266-5fd0-4c73-8d8e-376e85112dbd","Type":"ContainerStarted","Data":"38ce84b2f8f4512f3288ca36cccc13a36c96cf82373711e115e10083eb7e2d04"} Dec 12 04:47:34 crc kubenswrapper[4796]: I1212 04:47:34.883428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nvw2w" event={"ID":"379b9266-5fd0-4c73-8d8e-376e85112dbd","Type":"ContainerStarted","Data":"b82c07bb8b56486f49a415988b5493be876b0b1808699746144bfacc23dc1d08"} Dec 12 04:47:34 crc kubenswrapper[4796]: I1212 04:47:34.883442 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nvw2w" Dec 12 04:47:34 crc kubenswrapper[4796]: I1212 04:47:34.912768 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nvw2w" podStartSLOduration=3.9127440289999997 podStartE2EDuration="3.912744029s" podCreationTimestamp="2025-12-12 04:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:47:34.900388418 +0000 UTC m=+845.776405565" watchObservedRunningTime="2025-12-12 04:47:34.912744029 +0000 UTC m=+845.788761176" Dec 12 04:47:42 crc kubenswrapper[4796]: I1212 04:47:42.933057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" event={"ID":"7585dd28-35f6-4a54-b39c-9bdbecf98c13","Type":"ContainerStarted","Data":"c27f38695149c49318c8dcbbb6f840771946f33e78bfb441bb8823994404da37"} Dec 12 04:47:42 crc kubenswrapper[4796]: I1212 04:47:42.933721 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:42 crc kubenswrapper[4796]: I1212 04:47:42.935263 4796 generic.go:334] "Generic (PLEG): container finished" podID="a0877598-c08d-4323-9919-775d9d1f789d" containerID="6f3700de87234c6c457b028db9ca7350251fb3d9af0c0031c41b26919143a71c" exitCode=0 Dec 12 04:47:42 crc kubenswrapper[4796]: I1212 04:47:42.935330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" 
event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerDied","Data":"6f3700de87234c6c457b028db9ca7350251fb3d9af0c0031c41b26919143a71c"} Dec 12 04:47:42 crc kubenswrapper[4796]: I1212 04:47:42.953690 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" podStartSLOduration=3.001378613 podStartE2EDuration="11.953669727s" podCreationTimestamp="2025-12-12 04:47:31 +0000 UTC" firstStartedPulling="2025-12-12 04:47:33.035998788 +0000 UTC m=+843.912015925" lastFinishedPulling="2025-12-12 04:47:41.988289882 +0000 UTC m=+852.864307039" observedRunningTime="2025-12-12 04:47:42.950227908 +0000 UTC m=+853.826245065" watchObservedRunningTime="2025-12-12 04:47:42.953669727 +0000 UTC m=+853.829686874" Dec 12 04:47:43 crc kubenswrapper[4796]: I1212 04:47:43.942217 4796 generic.go:334] "Generic (PLEG): container finished" podID="a0877598-c08d-4323-9919-775d9d1f789d" containerID="056b4395fa34ec91ac27808c93114c729ad84e833702d48418af0a721386b9de" exitCode=0 Dec 12 04:47:43 crc kubenswrapper[4796]: I1212 04:47:43.942347 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerDied","Data":"056b4395fa34ec91ac27808c93114c729ad84e833702d48418af0a721386b9de"} Dec 12 04:47:44 crc kubenswrapper[4796]: I1212 04:47:44.952214 4796 generic.go:334] "Generic (PLEG): container finished" podID="a0877598-c08d-4323-9919-775d9d1f789d" containerID="51c974bb3e9eb63fb83c965b6cfeaa085f18c6c2756145b57819c40685e0577c" exitCode=0 Dec 12 04:47:44 crc kubenswrapper[4796]: I1212 04:47:44.952317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerDied","Data":"51c974bb3e9eb63fb83c965b6cfeaa085f18c6c2756145b57819c40685e0577c"} Dec 12 04:47:45 crc kubenswrapper[4796]: I1212 04:47:45.960527 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"7bf5dceacd3ab537cecba8bd657f9b625c48567b48497fc7ce622bb99f71423f"} Dec 12 04:47:45 crc kubenswrapper[4796]: I1212 04:47:45.960854 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"ecd3ccfb02e260abf9a7e6cc859527307a377785f9588ccfb517eaa6a17a6036"} Dec 12 04:47:46 crc kubenswrapper[4796]: I1212 04:47:46.970882 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"13fe02237ffa23b0d8f2d8c89d3c3082cd003fec811705498234e2024b2e3746"} Dec 12 04:47:46 crc kubenswrapper[4796]: I1212 04:47:46.971148 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"cfa69f3ff753cb9ca515c6826f888e82bda100d96ea61e33a5a75698420b8777"} Dec 12 04:47:46 crc kubenswrapper[4796]: I1212 04:47:46.971157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"2bfefec0077fcd94c398aa180c526358bca2b134b7300e8702d4b45e0f3679e4"} Dec 12 04:47:46 crc kubenswrapper[4796]: I1212 04:47:46.971165 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-s6n9k" event={"ID":"a0877598-c08d-4323-9919-775d9d1f789d","Type":"ContainerStarted","Data":"150c311680a035b0bd40be6c5d3227c697f336e222e27331a5a6539812743e67"} Dec 12 04:47:46 crc kubenswrapper[4796]: I1212 04:47:46.971176 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:47 crc kubenswrapper[4796]: I1212 04:47:47.000587 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-s6n9k" podStartSLOduration=7.000555844 podStartE2EDuration="16.000564399s" podCreationTimestamp="2025-12-12 04:47:31 +0000 UTC" firstStartedPulling="2025-12-12 04:47:32.955811219 +0000 UTC m=+843.831828376" lastFinishedPulling="2025-12-12 04:47:41.955819794 +0000 UTC m=+852.831836931" observedRunningTime="2025-12-12 04:47:46.997263234 +0000 UTC m=+857.873280411" watchObservedRunningTime="2025-12-12 04:47:47.000564399 +0000 UTC m=+857.876581556" Dec 12 04:47:47 crc kubenswrapper[4796]: I1212 04:47:47.716435 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:47 crc kubenswrapper[4796]: I1212 04:47:47.752383 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:47:52 crc kubenswrapper[4796]: I1212 04:47:52.708151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-fpns2" Dec 12 04:47:52 crc kubenswrapper[4796]: I1212 04:47:52.805766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-vm69t" Dec 12 04:47:53 crc kubenswrapper[4796]: I1212 04:47:53.696167 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nvw2w" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.656526 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.657680 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.689008 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qs2bk" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.691007 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.691010 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.704689 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.733568 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gcw\" (UniqueName: \"kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw\") pod \"openstack-operator-index-prv4p\" (UID: \"9913be50-0fe3-420d-ae88-ecd3402a08c4\") " pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.834626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92gcw\" (UniqueName: \"kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw\") pod \"openstack-operator-index-prv4p\" (UID: \"9913be50-0fe3-420d-ae88-ecd3402a08c4\") " pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.867378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92gcw\" (UniqueName: \"kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw\") pod \"openstack-operator-index-prv4p\" (UID: \"9913be50-0fe3-420d-ae88-ecd3402a08c4\") " pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:47:56 crc kubenswrapper[4796]: I1212 04:47:56.988107 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:47:57 crc kubenswrapper[4796]: I1212 04:47:57.351692 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:47:58 crc kubenswrapper[4796]: I1212 04:47:58.045184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-prv4p" event={"ID":"9913be50-0fe3-420d-ae88-ecd3402a08c4","Type":"ContainerStarted","Data":"12d4f2a016b0e5a9d21ac34e3bd50cc00fc2d9018f3bdda2581cff5472e9a3d6"} Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.019123 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.623516 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8jkpm"] Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.624752 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.636254 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8jkpm"] Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.743785 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4kp\" (UniqueName: \"kubernetes.io/projected/e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31-kube-api-access-jl4kp\") pod \"openstack-operator-index-8jkpm\" (UID: \"e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31\") " pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.844638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4kp\" (UniqueName: \"kubernetes.io/projected/e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31-kube-api-access-jl4kp\") pod \"openstack-operator-index-8jkpm\" (UID: \"e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31\") " pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.862684 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl4kp\" (UniqueName: \"kubernetes.io/projected/e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31-kube-api-access-jl4kp\") pod \"openstack-operator-index-8jkpm\" (UID: \"e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31\") " pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:00 crc kubenswrapper[4796]: I1212 04:48:00.938813 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:01 crc kubenswrapper[4796]: I1212 04:48:01.102866 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-prv4p" event={"ID":"9913be50-0fe3-420d-ae88-ecd3402a08c4","Type":"ContainerStarted","Data":"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69"} Dec 12 04:48:01 crc kubenswrapper[4796]: I1212 04:48:01.103262 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-prv4p" podUID="9913be50-0fe3-420d-ae88-ecd3402a08c4" containerName="registry-server" containerID="cri-o://628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69" gracePeriod=2 Dec 12 04:48:01 crc kubenswrapper[4796]: I1212 04:48:01.180353 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-prv4p" podStartSLOduration=2.164892268 podStartE2EDuration="5.180333432s" podCreationTimestamp="2025-12-12 04:47:56 +0000 UTC" firstStartedPulling="2025-12-12 04:47:57.353119174 +0000 UTC m=+868.229136321" lastFinishedPulling="2025-12-12 04:48:00.368560318 +0000 UTC m=+871.244577485" observedRunningTime="2025-12-12 04:48:01.174368543 +0000 UTC m=+872.050385700" watchObservedRunningTime="2025-12-12 04:48:01.180333432 +0000 UTC m=+872.056350579" Dec 12 04:48:01 crc kubenswrapper[4796]: I1212 04:48:01.448467 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8jkpm"] Dec 12 04:48:01 crc kubenswrapper[4796]: I1212 04:48:01.987580 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.061804 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92gcw\" (UniqueName: \"kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw\") pod \"9913be50-0fe3-420d-ae88-ecd3402a08c4\" (UID: \"9913be50-0fe3-420d-ae88-ecd3402a08c4\") " Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.068449 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw" (OuterVolumeSpecName: "kube-api-access-92gcw") pod "9913be50-0fe3-420d-ae88-ecd3402a08c4" (UID: "9913be50-0fe3-420d-ae88-ecd3402a08c4"). InnerVolumeSpecName "kube-api-access-92gcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.109922 4796 generic.go:334] "Generic (PLEG): container finished" podID="9913be50-0fe3-420d-ae88-ecd3402a08c4" containerID="628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69" exitCode=0 Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.110033 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-prv4p" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.111172 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-prv4p" event={"ID":"9913be50-0fe3-420d-ae88-ecd3402a08c4","Type":"ContainerDied","Data":"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69"} Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.111215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-prv4p" event={"ID":"9913be50-0fe3-420d-ae88-ecd3402a08c4","Type":"ContainerDied","Data":"12d4f2a016b0e5a9d21ac34e3bd50cc00fc2d9018f3bdda2581cff5472e9a3d6"} Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.111233 4796 scope.go:117] "RemoveContainer" containerID="628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.113737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8jkpm" event={"ID":"e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31","Type":"ContainerStarted","Data":"e62d189c51265253ba8c8d35c30ecd93de1c20510dd333a769dc65672a865775"} Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.113757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8jkpm" event={"ID":"e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31","Type":"ContainerStarted","Data":"8e10e186a3e794f376c16b5aedbe764852104551a22e1e83b0355b813ce3a607"} Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.134323 4796 scope.go:117] "RemoveContainer" containerID="628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69" Dec 12 04:48:02 crc kubenswrapper[4796]: E1212 04:48:02.134766 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69\": container with ID starting with 628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69 not found: ID does not exist" containerID="628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 
04:48:02.134804 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69"} err="failed to get container status \"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69\": rpc error: code = NotFound desc = could not find container \"628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69\": container with ID starting with 628560adb669dc49647efd09a70597a81812e603c2fcfe5acdc6c3a7918e5f69 not found: ID does not exist" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.141331 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8jkpm" podStartSLOduration=2.093975089 podStartE2EDuration="2.141275066s" podCreationTimestamp="2025-12-12 04:48:00 +0000 UTC" firstStartedPulling="2025-12-12 04:48:01.445654979 +0000 UTC m=+872.321672136" lastFinishedPulling="2025-12-12 04:48:01.492954966 +0000 UTC m=+872.368972113" observedRunningTime="2025-12-12 04:48:02.13632024 +0000 UTC m=+873.012337387" watchObservedRunningTime="2025-12-12 04:48:02.141275066 +0000 UTC m=+873.017292213" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.159628 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.162852 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92gcw\" (UniqueName: \"kubernetes.io/projected/9913be50-0fe3-420d-ae88-ecd3402a08c4-kube-api-access-92gcw\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.163629 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-prv4p"] Dec 12 04:48:02 crc kubenswrapper[4796]: I1212 04:48:02.722931 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-s6n9k" Dec 12 04:48:03 crc kubenswrapper[4796]: I1212 04:48:03.421526 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9913be50-0fe3-420d-ae88-ecd3402a08c4" path="/var/lib/kubelet/pods/9913be50-0fe3-420d-ae88-ecd3402a08c4/volumes" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.427390 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:05 crc kubenswrapper[4796]: E1212 04:48:05.427670 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9913be50-0fe3-420d-ae88-ecd3402a08c4" containerName="registry-server" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.427687 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9913be50-0fe3-420d-ae88-ecd3402a08c4" containerName="registry-server" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.427850 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9913be50-0fe3-420d-ae88-ecd3402a08c4" containerName="registry-server" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.428780 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.449657 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.506271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.506351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwn5m\" (UniqueName: \"kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.506392 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.607717 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.607777 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwn5m\" (UniqueName: \"kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.607811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.608452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.608811 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.648351 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jwn5m\" (UniqueName: \"kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m\") pod \"community-operators-47pq5\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:05 crc kubenswrapper[4796]: I1212 04:48:05.755381 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:06 crc kubenswrapper[4796]: I1212 04:48:06.037489 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:06 crc kubenswrapper[4796]: W1212 04:48:06.054441 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059e0d1e_00e9_444f_8343_5991d61e7e9e.slice/crio-8544cc25db0cf66f99a8c4825f13a25a13a15a2606958e0bf07bc226f8599b99 WatchSource:0}: Error finding container 8544cc25db0cf66f99a8c4825f13a25a13a15a2606958e0bf07bc226f8599b99: Status 404 returned error can't find the container with id 8544cc25db0cf66f99a8c4825f13a25a13a15a2606958e0bf07bc226f8599b99 Dec 12 04:48:06 crc kubenswrapper[4796]: I1212 04:48:06.141163 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerStarted","Data":"8544cc25db0cf66f99a8c4825f13a25a13a15a2606958e0bf07bc226f8599b99"} Dec 12 04:48:07 crc kubenswrapper[4796]: I1212 04:48:07.150240 4796 generic.go:334] "Generic (PLEG): container finished" podID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerID="38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9" exitCode=0 Dec 12 04:48:07 crc kubenswrapper[4796]: I1212 04:48:07.150300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerDied","Data":"38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9"} Dec 12 04:48:08 crc kubenswrapper[4796]: I1212 04:48:08.156940 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerStarted","Data":"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868"} Dec 12 04:48:09 crc kubenswrapper[4796]: I1212 04:48:09.163096 4796 generic.go:334] "Generic (PLEG): container finished" podID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerID="9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868" exitCode=0 Dec 12 04:48:09 crc kubenswrapper[4796]: I1212 04:48:09.163182 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerDied","Data":"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868"} Dec 12 04:48:10 crc kubenswrapper[4796]: I1212 04:48:10.170981 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerStarted","Data":"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30"} Dec 12 04:48:10 crc kubenswrapper[4796]: I1212 04:48:10.194130 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47pq5" 
podStartSLOduration=2.630804518 podStartE2EDuration="5.194110911s" podCreationTimestamp="2025-12-12 04:48:05 +0000 UTC" firstStartedPulling="2025-12-12 04:48:07.152071915 +0000 UTC m=+878.028089062" lastFinishedPulling="2025-12-12 04:48:09.715378308 +0000 UTC m=+880.591395455" observedRunningTime="2025-12-12 04:48:10.185844599 +0000 UTC m=+881.061861766" watchObservedRunningTime="2025-12-12 04:48:10.194110911 +0000 UTC m=+881.070128058" Dec 12 04:48:10 crc kubenswrapper[4796]: I1212 04:48:10.939238 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:10 crc kubenswrapper[4796]: I1212 04:48:10.939539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:10 crc kubenswrapper[4796]: I1212 04:48:10.988666 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:11 crc kubenswrapper[4796]: I1212 04:48:11.206016 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8jkpm" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.251426 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22"] Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.264865 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.269727 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wr9tr" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.283016 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22"] Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.390536 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrdp\" (UniqueName: \"kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.390769 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.390829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.491838 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrdp\" (UniqueName: \"kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.491936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.491967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.492428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.492509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.518194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrdp\" (UniqueName: \"kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp\") pod \"a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:13 crc kubenswrapper[4796]: I1212 04:48:13.591859 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.027881 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.029299 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.045044 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.097787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rlk9\" (UniqueName: \"kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.097950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.098005 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.199398 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.199465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.199543 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rlk9\" (UniqueName: \"kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.200124 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.200124 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.226482 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6rlk9\" (UniqueName: \"kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9\") pod \"redhat-marketplace-8ncl4\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.227537 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22"] Dec 12 04:48:14 crc kubenswrapper[4796]: W1212 04:48:14.257472 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef694040_71d2_464d_b70b_15b0ab44a2d8.slice/crio-cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277 WatchSource:0}: Error finding container cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277: Status 404 returned error can't find the container with id cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277 Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.353545 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:14 crc kubenswrapper[4796]: I1212 04:48:14.624018 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.197902 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerID="81f7eac07f2924159e7648791849fa53ed90e70fa5d00d9db322331669352804" exitCode=0 Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.197953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" event={"ID":"ef694040-71d2-464d-b70b-15b0ab44a2d8","Type":"ContainerDied","Data":"81f7eac07f2924159e7648791849fa53ed90e70fa5d00d9db322331669352804"} Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.198004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" event={"ID":"ef694040-71d2-464d-b70b-15b0ab44a2d8","Type":"ContainerStarted","Data":"cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277"} Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.200123 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerID="c009db87efd1b0ee9e0ee7ff1dabcb098f244a9abaadb2c757f50c14ba2075d2" exitCode=0 Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.200155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerDied","Data":"c009db87efd1b0ee9e0ee7ff1dabcb098f244a9abaadb2c757f50c14ba2075d2"} Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.200185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerStarted","Data":"c636076a36c445237aa34cfeaeb9105c3ac679906b6f48d614c0dd9b6c2a490c"} Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.756392 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.757497 4796 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:15 crc kubenswrapper[4796]: I1212 04:48:15.798128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:16 crc kubenswrapper[4796]: I1212 04:48:16.209528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerStarted","Data":"1aa3fef9f20910d445519a5bc2c6590191881ab7350c07cdd2e3a9879a19fa20"} Dec 12 04:48:16 crc kubenswrapper[4796]: I1212 04:48:16.215143 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerID="5505dec2a116af92dcb2e4197d98e43bad5a5c8ccb7b9e1e737706b3e326a752" exitCode=0 Dec 12 04:48:16 crc kubenswrapper[4796]: I1212 04:48:16.215264 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" event={"ID":"ef694040-71d2-464d-b70b-15b0ab44a2d8","Type":"ContainerDied","Data":"5505dec2a116af92dcb2e4197d98e43bad5a5c8ccb7b9e1e737706b3e326a752"} Dec 12 04:48:16 crc kubenswrapper[4796]: I1212 04:48:16.261820 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:17 crc kubenswrapper[4796]: I1212 04:48:17.221415 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerID="1aa3fef9f20910d445519a5bc2c6590191881ab7350c07cdd2e3a9879a19fa20" exitCode=0 Dec 12 04:48:17 crc kubenswrapper[4796]: I1212 04:48:17.221519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerDied","Data":"1aa3fef9f20910d445519a5bc2c6590191881ab7350c07cdd2e3a9879a19fa20"} Dec 12 04:48:17 crc kubenswrapper[4796]: I1212 04:48:17.230703 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerID="d7ede6e5b748b8abf3454b6225c442502c4ce09fa02ab2466a395bb6ca27a636" exitCode=0 Dec 12 04:48:17 crc kubenswrapper[4796]: I1212 04:48:17.230824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" event={"ID":"ef694040-71d2-464d-b70b-15b0ab44a2d8","Type":"ContainerDied","Data":"d7ede6e5b748b8abf3454b6225c442502c4ce09fa02ab2466a395bb6ca27a636"} Dec 12 04:48:17 crc kubenswrapper[4796]: I1212 04:48:17.421865 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.258786 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerStarted","Data":"ec6ec3445a7b3dd8eb9a3d73d7e27f0f56351a9e46fa27fed649c28c8df721d6"} Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.292971 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8ncl4" podStartSLOduration=1.821602039 podStartE2EDuration="4.292949911s" podCreationTimestamp="2025-12-12 04:48:14 +0000 UTC" firstStartedPulling="2025-12-12 04:48:15.201659627 +0000 UTC m=+886.077676774" lastFinishedPulling="2025-12-12 04:48:17.673007499 +0000 UTC 
m=+888.549024646" observedRunningTime="2025-12-12 04:48:18.288504711 +0000 UTC m=+889.164521868" watchObservedRunningTime="2025-12-12 04:48:18.292949911 +0000 UTC m=+889.168967058" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.656658 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.768255 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrdp\" (UniqueName: \"kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp\") pod \"ef694040-71d2-464d-b70b-15b0ab44a2d8\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.768355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle\") pod \"ef694040-71d2-464d-b70b-15b0ab44a2d8\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.768485 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util\") pod \"ef694040-71d2-464d-b70b-15b0ab44a2d8\" (UID: \"ef694040-71d2-464d-b70b-15b0ab44a2d8\") " Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.769167 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle" (OuterVolumeSpecName: "bundle") pod "ef694040-71d2-464d-b70b-15b0ab44a2d8" (UID: "ef694040-71d2-464d-b70b-15b0ab44a2d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.777419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp" (OuterVolumeSpecName: "kube-api-access-jqrdp") pod "ef694040-71d2-464d-b70b-15b0ab44a2d8" (UID: "ef694040-71d2-464d-b70b-15b0ab44a2d8"). InnerVolumeSpecName "kube-api-access-jqrdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.782619 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util" (OuterVolumeSpecName: "util") pod "ef694040-71d2-464d-b70b-15b0ab44a2d8" (UID: "ef694040-71d2-464d-b70b-15b0ab44a2d8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.870093 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.870131 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef694040-71d2-464d-b70b-15b0ab44a2d8-util\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:18 crc kubenswrapper[4796]: I1212 04:48:18.870143 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrdp\" (UniqueName: \"kubernetes.io/projected/ef694040-71d2-464d-b70b-15b0ab44a2d8-kube-api-access-jqrdp\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:19 crc kubenswrapper[4796]: I1212 04:48:19.266057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" event={"ID":"ef694040-71d2-464d-b70b-15b0ab44a2d8","Type":"ContainerDied","Data":"cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277"} Dec 12 04:48:19 crc kubenswrapper[4796]: I1212 04:48:19.266272 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf87f6c3db72225195267edd936ee5ef88c9022cf46486dd75a49d3773f52277" Dec 12 04:48:19 crc kubenswrapper[4796]: I1212 04:48:19.266088 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22" Dec 12 04:48:19 crc kubenswrapper[4796]: I1212 04:48:19.266384 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47pq5" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="registry-server" containerID="cri-o://2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30" gracePeriod=2 Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.851985 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.902936 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content\") pod \"059e0d1e-00e9-444f-8343-5991d61e7e9e\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.903040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwn5m\" (UniqueName: \"kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m\") pod \"059e0d1e-00e9-444f-8343-5991d61e7e9e\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.903075 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities\") pod \"059e0d1e-00e9-444f-8343-5991d61e7e9e\" (UID: \"059e0d1e-00e9-444f-8343-5991d61e7e9e\") " Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.904212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities" (OuterVolumeSpecName: "utilities") pod "059e0d1e-00e9-444f-8343-5991d61e7e9e" (UID: "059e0d1e-00e9-444f-8343-5991d61e7e9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.908693 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m" (OuterVolumeSpecName: "kube-api-access-jwn5m") pod "059e0d1e-00e9-444f-8343-5991d61e7e9e" (UID: "059e0d1e-00e9-444f-8343-5991d61e7e9e"). InnerVolumeSpecName "kube-api-access-jwn5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:48:20 crc kubenswrapper[4796]: I1212 04:48:20.972859 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "059e0d1e-00e9-444f-8343-5991d61e7e9e" (UID: "059e0d1e-00e9-444f-8343-5991d61e7e9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.004649 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwn5m\" (UniqueName: \"kubernetes.io/projected/059e0d1e-00e9-444f-8343-5991d61e7e9e-kube-api-access-jwn5m\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.004690 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.004700 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059e0d1e-00e9-444f-8343-5991d61e7e9e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.277667 4796 generic.go:334] "Generic (PLEG): container finished" podID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerID="2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30" exitCode=0 Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.277713 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerDied","Data":"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30"} Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.277724 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47pq5" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.277742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47pq5" event={"ID":"059e0d1e-00e9-444f-8343-5991d61e7e9e","Type":"ContainerDied","Data":"8544cc25db0cf66f99a8c4825f13a25a13a15a2606958e0bf07bc226f8599b99"} Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.277765 4796 scope.go:117] "RemoveContainer" containerID="2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.295895 4796 scope.go:117] "RemoveContainer" containerID="9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.311009 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.328207 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47pq5"] Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.340765 4796 scope.go:117] "RemoveContainer" containerID="38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.364691 4796 scope.go:117] "RemoveContainer" containerID="2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30" Dec 12 04:48:21 crc kubenswrapper[4796]: E1212 04:48:21.369422 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30\": container with ID starting with 2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30 not found: ID does not exist" containerID="2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.369473 
4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30"} err="failed to get container status \"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30\": rpc error: code = NotFound desc = could not find container \"2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30\": container with ID starting with 2ff4e0332bdb7c18611936bc9fd6fb58dfa2e73a5b997627e901e3bb65813e30 not found: ID does not exist" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.369497 4796 scope.go:117] "RemoveContainer" containerID="9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868" Dec 12 04:48:21 crc kubenswrapper[4796]: E1212 04:48:21.369897 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868\": container with ID starting with 9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868 not found: ID does not exist" containerID="9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.369935 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868"} err="failed to get container status \"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868\": rpc error: code = NotFound desc = could not find container \"9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868\": container with ID starting with 9cba79b57cfcdda402f36eaf3d02135f400447630e259f0172f2c99fa7d20868 not found: ID does not exist" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.369963 4796 scope.go:117] "RemoveContainer" containerID="38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9" Dec 12 04:48:21 crc kubenswrapper[4796]: E1212 04:48:21.370310 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9\": container with ID starting with 38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9 not found: ID does not exist" containerID="38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.370333 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9"} err="failed to get container status \"38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9\": rpc error: code = NotFound desc = could not find container \"38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9\": container with ID starting with 38f24514a806b87b0352fdc72eed8cd394bab80f97fb8128438c5078679b90f9 not found: ID does not exist" Dec 12 04:48:21 crc kubenswrapper[4796]: I1212 04:48:21.418893 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" path="/var/lib/kubelet/pods/059e0d1e-00e9-444f-8343-5991d61e7e9e/volumes" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212320 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m"] Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.212881 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="extract-utilities" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212899 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="extract-utilities" Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.212919 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="extract" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212927 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="extract" Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.212939 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="registry-server" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212949 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="registry-server" Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.212967 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="pull" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212974 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="pull" Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.212982 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="util" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.212989 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="util" Dec 12 04:48:23 crc kubenswrapper[4796]: E1212 04:48:23.213001 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="extract-content" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.213010 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="extract-content" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.213151 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef694040-71d2-464d-b70b-15b0ab44a2d8" containerName="extract" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.213176 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="059e0d1e-00e9-444f-8343-5991d61e7e9e" containerName="registry-server" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.213679 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.218243 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8rk4v" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.231593 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m"] Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.339626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bzr\" (UniqueName: \"kubernetes.io/projected/2f68b517-0566-4a78-92bd-215a5b6e304b-kube-api-access-p2bzr\") pod \"openstack-operator-controller-operator-7d67f9f647-pf79m\" (UID: \"2f68b517-0566-4a78-92bd-215a5b6e304b\") " pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.440584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2bzr\" (UniqueName: \"kubernetes.io/projected/2f68b517-0566-4a78-92bd-215a5b6e304b-kube-api-access-p2bzr\") pod \"openstack-operator-controller-operator-7d67f9f647-pf79m\" (UID: \"2f68b517-0566-4a78-92bd-215a5b6e304b\") " pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.460049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2bzr\" (UniqueName: \"kubernetes.io/projected/2f68b517-0566-4a78-92bd-215a5b6e304b-kube-api-access-p2bzr\") pod \"openstack-operator-controller-operator-7d67f9f647-pf79m\" (UID: \"2f68b517-0566-4a78-92bd-215a5b6e304b\") " pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.534653 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:23 crc kubenswrapper[4796]: I1212 04:48:23.786960 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m"] Dec 12 04:48:24 crc kubenswrapper[4796]: I1212 04:48:24.297553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" event={"ID":"2f68b517-0566-4a78-92bd-215a5b6e304b","Type":"ContainerStarted","Data":"6eb7364b5aa1fe6e440ca657b987057ae26a63e18fea8fef3e84a886949550f9"} Dec 12 04:48:24 crc kubenswrapper[4796]: I1212 04:48:24.354573 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:24 crc kubenswrapper[4796]: I1212 04:48:24.354629 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:24 crc kubenswrapper[4796]: I1212 04:48:24.403026 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:25 crc kubenswrapper[4796]: I1212 04:48:25.338843 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:27 crc kubenswrapper[4796]: I1212 04:48:27.214772 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:27 crc kubenswrapper[4796]: I1212 04:48:27.317706 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8ncl4" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="registry-server" containerID="cri-o://ec6ec3445a7b3dd8eb9a3d73d7e27f0f56351a9e46fa27fed649c28c8df721d6" gracePeriod=2 Dec 12 04:48:28 crc kubenswrapper[4796]: I1212 04:48:28.325711 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerID="ec6ec3445a7b3dd8eb9a3d73d7e27f0f56351a9e46fa27fed649c28c8df721d6" exitCode=0 Dec 12 04:48:28 crc kubenswrapper[4796]: I1212 04:48:28.325735 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerDied","Data":"ec6ec3445a7b3dd8eb9a3d73d7e27f0f56351a9e46fa27fed649c28c8df721d6"} Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.068260 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.207110 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content\") pod \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.207148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities\") pod \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.207170 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rlk9\" (UniqueName: \"kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9\") pod \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\" (UID: \"5f6dfb68-d1f4-4573-95f1-432fc14f6a22\") " Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.210170 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities" (OuterVolumeSpecName: "utilities") pod "5f6dfb68-d1f4-4573-95f1-432fc14f6a22" (UID: "5f6dfb68-d1f4-4573-95f1-432fc14f6a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.212737 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9" (OuterVolumeSpecName: "kube-api-access-6rlk9") pod "5f6dfb68-d1f4-4573-95f1-432fc14f6a22" (UID: "5f6dfb68-d1f4-4573-95f1-432fc14f6a22"). InnerVolumeSpecName "kube-api-access-6rlk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.240514 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f6dfb68-d1f4-4573-95f1-432fc14f6a22" (UID: "5f6dfb68-d1f4-4573-95f1-432fc14f6a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.307856 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.307888 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.307898 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rlk9\" (UniqueName: \"kubernetes.io/projected/5f6dfb68-d1f4-4573-95f1-432fc14f6a22-kube-api-access-6rlk9\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.337325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" event={"ID":"2f68b517-0566-4a78-92bd-215a5b6e304b","Type":"ContainerStarted","Data":"7315caad05bc0eb41aeec429aa9f87b06beebc11037f7b7e13b91f438636c8d7"} Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.338260 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.339879 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ncl4" event={"ID":"5f6dfb68-d1f4-4573-95f1-432fc14f6a22","Type":"ContainerDied","Data":"c636076a36c445237aa34cfeaeb9105c3ac679906b6f48d614c0dd9b6c2a490c"} Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.339909 4796 scope.go:117] "RemoveContainer" containerID="ec6ec3445a7b3dd8eb9a3d73d7e27f0f56351a9e46fa27fed649c28c8df721d6" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.340008 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ncl4" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.359642 4796 scope.go:117] "RemoveContainer" containerID="1aa3fef9f20910d445519a5bc2c6590191881ab7350c07cdd2e3a9879a19fa20" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.388443 4796 scope.go:117] "RemoveContainer" containerID="c009db87efd1b0ee9e0ee7ff1dabcb098f244a9abaadb2c757f50c14ba2075d2" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.404527 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" podStartSLOduration=1.09452291 podStartE2EDuration="7.404505961s" podCreationTimestamp="2025-12-12 04:48:23 +0000 UTC" firstStartedPulling="2025-12-12 04:48:23.803726416 +0000 UTC m=+894.679743563" lastFinishedPulling="2025-12-12 04:48:30.113709457 +0000 UTC m=+900.989726614" observedRunningTime="2025-12-12 04:48:30.38742245 +0000 UTC m=+901.263439597" watchObservedRunningTime="2025-12-12 04:48:30.404505961 +0000 UTC m=+901.280523108" Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.409111 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:30 crc kubenswrapper[4796]: I1212 04:48:30.431446 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ncl4"] Dec 12 04:48:31 crc kubenswrapper[4796]: I1212 04:48:31.419790 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" path="/var/lib/kubelet/pods/5f6dfb68-d1f4-4573-95f1-432fc14f6a22/volumes" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.352555 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:48:43 crc kubenswrapper[4796]: E1212 04:48:43.353340 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="registry-server" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.353356 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="registry-server" Dec 12 04:48:43 crc kubenswrapper[4796]: E1212 04:48:43.353370 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="extract-utilities" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.353379 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="extract-utilities" Dec 12 04:48:43 crc kubenswrapper[4796]: E1212 04:48:43.353403 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="extract-content" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.353412 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="extract-content" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.353543 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6dfb68-d1f4-4573-95f1-432fc14f6a22" containerName="registry-server" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.354529 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.364752 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.369566 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.369639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.369667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlwn\" (UniqueName: \"kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.472192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.472250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.472267 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qlwn\" (UniqueName: \"kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.473589 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.473641 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.498137 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7qlwn\" (UniqueName: \"kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn\") pod \"certified-operators-skqv5\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.537616 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7d67f9f647-pf79m" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.673638 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:43 crc kubenswrapper[4796]: I1212 04:48:43.971108 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:48:44 crc kubenswrapper[4796]: I1212 04:48:44.427401 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerID="2b4a502e426bd594eb97cfdbeb3b5408103549c1488d02c6df664ce83751dd8e" exitCode=0 Dec 12 04:48:44 crc kubenswrapper[4796]: I1212 04:48:44.427444 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerDied","Data":"2b4a502e426bd594eb97cfdbeb3b5408103549c1488d02c6df664ce83751dd8e"} Dec 12 04:48:44 crc kubenswrapper[4796]: I1212 04:48:44.427476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerStarted","Data":"0f24f29c11ebcb00d0105b39a027e80494abfb0c1246d6f1b9884093a3a52c53"} Dec 12 04:48:45 crc kubenswrapper[4796]: I1212 04:48:45.450477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerStarted","Data":"b63cb68c7f2c7ccfc41ee4cf70f1ca22e6c7bed2b79deccca3547ad8c56ca559"} Dec 12 04:48:46 crc kubenswrapper[4796]: I1212 04:48:46.455332 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerID="b63cb68c7f2c7ccfc41ee4cf70f1ca22e6c7bed2b79deccca3547ad8c56ca559" exitCode=0 Dec 12 04:48:46 crc kubenswrapper[4796]: I1212 04:48:46.455374 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerDied","Data":"b63cb68c7f2c7ccfc41ee4cf70f1ca22e6c7bed2b79deccca3547ad8c56ca559"} Dec 12 04:48:47 crc kubenswrapper[4796]: I1212 04:48:47.463299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerStarted","Data":"c6f8e633940ca0251aebe16f4c058dbb7ef8aefe80e739db44d97f3cd99da285"} Dec 12 04:48:47 crc kubenswrapper[4796]: I1212 04:48:47.484370 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-skqv5" podStartSLOduration=2.017770732 podStartE2EDuration="4.484351266s" podCreationTimestamp="2025-12-12 04:48:43 +0000 UTC" firstStartedPulling="2025-12-12 04:48:44.428896467 +0000 UTC m=+915.304913614" lastFinishedPulling="2025-12-12 04:48:46.895477001 +0000 UTC m=+917.771494148" observedRunningTime="2025-12-12 04:48:47.481036781 +0000 UTC 
m=+918.357053928" watchObservedRunningTime="2025-12-12 04:48:47.484351266 +0000 UTC m=+918.360368413" Dec 12 04:48:53 crc kubenswrapper[4796]: I1212 04:48:53.674793 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:53 crc kubenswrapper[4796]: I1212 04:48:53.676604 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:53 crc kubenswrapper[4796]: I1212 04:48:53.796803 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:54 crc kubenswrapper[4796]: I1212 04:48:54.709929 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:54 crc kubenswrapper[4796]: I1212 04:48:54.816070 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:48:56 crc kubenswrapper[4796]: I1212 04:48:56.669060 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-skqv5" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="registry-server" containerID="cri-o://c6f8e633940ca0251aebe16f4c058dbb7ef8aefe80e739db44d97f3cd99da285" gracePeriod=2 Dec 12 04:48:57 crc kubenswrapper[4796]: I1212 04:48:57.676106 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerID="c6f8e633940ca0251aebe16f4c058dbb7ef8aefe80e739db44d97f3cd99da285" exitCode=0 Dec 12 04:48:57 crc kubenswrapper[4796]: I1212 04:48:57.676192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerDied","Data":"c6f8e633940ca0251aebe16f4c058dbb7ef8aefe80e739db44d97f3cd99da285"} Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.231865 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.331108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities\") pod \"5a06ab65-680e-410e-84ec-04dc0237dfa6\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.331156 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qlwn\" (UniqueName: \"kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn\") pod \"5a06ab65-680e-410e-84ec-04dc0237dfa6\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.331248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content\") pod \"5a06ab65-680e-410e-84ec-04dc0237dfa6\" (UID: \"5a06ab65-680e-410e-84ec-04dc0237dfa6\") " Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.331888 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities" (OuterVolumeSpecName: "utilities") pod "5a06ab65-680e-410e-84ec-04dc0237dfa6" (UID: "5a06ab65-680e-410e-84ec-04dc0237dfa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.339458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn" (OuterVolumeSpecName: "kube-api-access-7qlwn") pod "5a06ab65-680e-410e-84ec-04dc0237dfa6" (UID: "5a06ab65-680e-410e-84ec-04dc0237dfa6"). InnerVolumeSpecName "kube-api-access-7qlwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.399051 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a06ab65-680e-410e-84ec-04dc0237dfa6" (UID: "5a06ab65-680e-410e-84ec-04dc0237dfa6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.432628 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.432653 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qlwn\" (UniqueName: \"kubernetes.io/projected/5a06ab65-680e-410e-84ec-04dc0237dfa6-kube-api-access-7qlwn\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.432666 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a06ab65-680e-410e-84ec-04dc0237dfa6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.687477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skqv5" event={"ID":"5a06ab65-680e-410e-84ec-04dc0237dfa6","Type":"ContainerDied","Data":"0f24f29c11ebcb00d0105b39a027e80494abfb0c1246d6f1b9884093a3a52c53"} Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.687526 4796 scope.go:117] "RemoveContainer" containerID="c6f8e633940ca0251aebe16f4c058dbb7ef8aefe80e739db44d97f3cd99da285" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.687621 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skqv5" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.715762 4796 scope.go:117] "RemoveContainer" containerID="b63cb68c7f2c7ccfc41ee4cf70f1ca22e6c7bed2b79deccca3547ad8c56ca559" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.739832 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.745213 4796 scope.go:117] "RemoveContainer" containerID="2b4a502e426bd594eb97cfdbeb3b5408103549c1488d02c6df664ce83751dd8e" Dec 12 04:48:59 crc kubenswrapper[4796]: I1212 04:48:59.806561 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-skqv5"] Dec 12 04:49:01 crc kubenswrapper[4796]: I1212 04:49:01.419134 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" path="/var/lib/kubelet/pods/5a06ab65-680e-410e-84ec-04dc0237dfa6/volumes" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.068171 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97"] Dec 12 04:49:03 crc kubenswrapper[4796]: E1212 04:49:03.068561 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="registry-server" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.068577 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="registry-server" Dec 12 04:49:03 crc kubenswrapper[4796]: E1212 04:49:03.068595 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="extract-utilities" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.068603 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="extract-utilities" Dec 12 04:49:03 crc kubenswrapper[4796]: 
E1212 04:49:03.068614 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="extract-content" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.068622 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="extract-content" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.068780 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a06ab65-680e-410e-84ec-04dc0237dfa6" containerName="registry-server" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.069712 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.072305 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-h4dq9" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.088011 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.089571 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.092877 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9ns7c" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.096604 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.103228 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.105036 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.108916 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ht2vh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.114133 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.144148 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.180233 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.186724 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.197470 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-twgc8" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.205573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqbh\" (UniqueName: \"kubernetes.io/projected/f4b37e55-be7c-467b-9739-e82c28f1916e-kube-api-access-7tqbh\") pod \"barbican-operator-controller-manager-7d9dfd778-f9h97\" (UID: \"f4b37e55-be7c-467b-9739-e82c28f1916e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.205634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67z2\" (UniqueName: \"kubernetes.io/projected/565c4c89-1b44-462b-8307-15d3d0a6cf1f-kube-api-access-x67z2\") pod \"designate-operator-controller-manager-697fb699cf-qzrqh\" (UID: \"565c4c89-1b44-462b-8307-15d3d0a6cf1f\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.205681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtzrk\" (UniqueName: \"kubernetes.io/projected/19b30665-06c6-48e5-8ec7-3eeaf3d3e72e-kube-api-access-xtzrk\") pod \"cinder-operator-controller-manager-6c677c69b-27f9h\" (UID: \"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.225929 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.257425 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.258551 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.261422 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.262449 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.262968 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sxqdw" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.267351 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p6gv4" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.291282 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.295186 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.300884 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.302025 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.307713 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.307884 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t922h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.314973 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjxs\" (UniqueName: \"kubernetes.io/projected/22df48e7-88f5-43df-bdce-9116599bea1b-kube-api-access-2vjxs\") pod \"glance-operator-controller-manager-5697bb5779-mdmcv\" (UID: \"22df48e7-88f5-43df-bdce-9116599bea1b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.315142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqbh\" (UniqueName: \"kubernetes.io/projected/f4b37e55-be7c-467b-9739-e82c28f1916e-kube-api-access-7tqbh\") pod \"barbican-operator-controller-manager-7d9dfd778-f9h97\" (UID: \"f4b37e55-be7c-467b-9739-e82c28f1916e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.315169 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x67z2\" (UniqueName: \"kubernetes.io/projected/565c4c89-1b44-462b-8307-15d3d0a6cf1f-kube-api-access-x67z2\") pod \"designate-operator-controller-manager-697fb699cf-qzrqh\" (UID: \"565c4c89-1b44-462b-8307-15d3d0a6cf1f\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.316344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtzrk\" (UniqueName: \"kubernetes.io/projected/19b30665-06c6-48e5-8ec7-3eeaf3d3e72e-kube-api-access-xtzrk\") pod \"cinder-operator-controller-manager-6c677c69b-27f9h\" (UID: \"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e\") " 
pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.316375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bf4\" (UniqueName: \"kubernetes.io/projected/035421c3-b1dd-48de-a195-04bfef7c5a0e-kube-api-access-t7bf4\") pod \"horizon-operator-controller-manager-68c6d99b8f-5v7h7\" (UID: \"035421c3-b1dd-48de-a195-04bfef7c5a0e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.317398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk47\" (UniqueName: \"kubernetes.io/projected/3092bc98-4221-47ff-bae0-06efcfa85522-kube-api-access-ggk47\") pod \"heat-operator-controller-manager-5f64f6f8bb-jqd2f\" (UID: \"3092bc98-4221-47ff-bae0-06efcfa85522\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.326451 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.345719 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.346900 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.353296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqbh\" (UniqueName: \"kubernetes.io/projected/f4b37e55-be7c-467b-9739-e82c28f1916e-kube-api-access-7tqbh\") pod \"barbican-operator-controller-manager-7d9dfd778-f9h97\" (UID: \"f4b37e55-be7c-467b-9739-e82c28f1916e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.357821 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xv9vv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.372992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x67z2\" (UniqueName: \"kubernetes.io/projected/565c4c89-1b44-462b-8307-15d3d0a6cf1f-kube-api-access-x67z2\") pod \"designate-operator-controller-manager-697fb699cf-qzrqh\" (UID: \"565c4c89-1b44-462b-8307-15d3d0a6cf1f\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.383972 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtzrk\" (UniqueName: \"kubernetes.io/projected/19b30665-06c6-48e5-8ec7-3eeaf3d3e72e-kube-api-access-xtzrk\") pod \"cinder-operator-controller-manager-6c677c69b-27f9h\" (UID: \"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.393591 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.399941 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.406880 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428198 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhsj\" (UniqueName: \"kubernetes.io/projected/c14d829a-f63e-404c-b117-65c0e15280e8-kube-api-access-5jhsj\") pod \"ironic-operator-controller-manager-967d97867-w5fjz\" (UID: \"c14d829a-f63e-404c-b117-65c0e15280e8\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7bf4\" (UniqueName: \"kubernetes.io/projected/035421c3-b1dd-48de-a195-04bfef7c5a0e-kube-api-access-t7bf4\") pod \"horizon-operator-controller-manager-68c6d99b8f-5v7h7\" (UID: \"035421c3-b1dd-48de-a195-04bfef7c5a0e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggk47\" (UniqueName: \"kubernetes.io/projected/3092bc98-4221-47ff-bae0-06efcfa85522-kube-api-access-ggk47\") pod \"heat-operator-controller-manager-5f64f6f8bb-jqd2f\" (UID: \"3092bc98-4221-47ff-bae0-06efcfa85522\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8c9s\" (UniqueName: \"kubernetes.io/projected/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-kube-api-access-b8c9s\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.428412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vjxs\" (UniqueName: \"kubernetes.io/projected/22df48e7-88f5-43df-bdce-9116599bea1b-kube-api-access-2vjxs\") pod \"glance-operator-controller-manager-5697bb5779-mdmcv\" (UID: \"22df48e7-88f5-43df-bdce-9116599bea1b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.449348 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.456994 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.458035 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.458920 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.459268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.466435 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5kl4d" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.489770 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-69bkg" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.493078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vjxs\" (UniqueName: \"kubernetes.io/projected/22df48e7-88f5-43df-bdce-9116599bea1b-kube-api-access-2vjxs\") pod \"glance-operator-controller-manager-5697bb5779-mdmcv\" (UID: \"22df48e7-88f5-43df-bdce-9116599bea1b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.501243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggk47\" (UniqueName: \"kubernetes.io/projected/3092bc98-4221-47ff-bae0-06efcfa85522-kube-api-access-ggk47\") pod \"heat-operator-controller-manager-5f64f6f8bb-jqd2f\" (UID: \"3092bc98-4221-47ff-bae0-06efcfa85522\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.506630 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.524068 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.531681 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7bf4\" (UniqueName: \"kubernetes.io/projected/035421c3-b1dd-48de-a195-04bfef7c5a0e-kube-api-access-t7bf4\") pod \"horizon-operator-controller-manager-68c6d99b8f-5v7h7\" (UID: \"035421c3-b1dd-48de-a195-04bfef7c5a0e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.534536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8c9s\" (UniqueName: \"kubernetes.io/projected/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-kube-api-access-b8c9s\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.534686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhsj\" (UniqueName: \"kubernetes.io/projected/c14d829a-f63e-404c-b117-65c0e15280e8-kube-api-access-5jhsj\") pod \"ironic-operator-controller-manager-967d97867-w5fjz\" (UID: \"c14d829a-f63e-404c-b117-65c0e15280e8\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.534774 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nplw2\" (UniqueName: \"kubernetes.io/projected/9fb465c9-338c-4755-ba24-b7985e57fa06-kube-api-access-nplw2\") pod \"manila-operator-controller-manager-5b5fd79c9c-47x2m\" (UID: \"9fb465c9-338c-4755-ba24-b7985e57fa06\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.534883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.535026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wws\" (UniqueName: \"kubernetes.io/projected/b47de1f3-3223-47bb-a707-72ee23490049-kube-api-access-l2wws\") pod \"keystone-operator-controller-manager-7765d96ddf-6dsjv\" (UID: \"b47de1f3-3223-47bb-a707-72ee23490049\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:49:03 crc kubenswrapper[4796]: E1212 04:49:03.536123 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:03 crc kubenswrapper[4796]: E1212 04:49:03.536244 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:04.036227455 +0000 UTC m=+934.912244602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.544142 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.561199 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.562212 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.566486 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.567284 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.574202 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.575512 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.578589 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tlwxd" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.578845 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-cz2kv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.584537 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tv8nt" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.588296 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.600420 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.602877 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.604168 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.606197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8c9s\" (UniqueName: \"kubernetes.io/projected/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-kube-api-access-b8c9s\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.606556 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p7h2q" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.607681 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.608844 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhsj\" (UniqueName: \"kubernetes.io/projected/c14d829a-f63e-404c-b117-65c0e15280e8-kube-api-access-5jhsj\") pod \"ironic-operator-controller-manager-967d97867-w5fjz\" (UID: \"c14d829a-f63e-404c-b117-65c0e15280e8\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.635991 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wws\" (UniqueName: \"kubernetes.io/projected/b47de1f3-3223-47bb-a707-72ee23490049-kube-api-access-l2wws\") pod \"keystone-operator-controller-manager-7765d96ddf-6dsjv\" (UID: \"b47de1f3-3223-47bb-a707-72ee23490049\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.636046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nplw2\" (UniqueName: \"kubernetes.io/projected/9fb465c9-338c-4755-ba24-b7985e57fa06-kube-api-access-nplw2\") pod \"manila-operator-controller-manager-5b5fd79c9c-47x2m\" (UID: \"9fb465c9-338c-4755-ba24-b7985e57fa06\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.636107 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncsh\" (UniqueName: \"kubernetes.io/projected/a0340c55-5a39-4841-a602-694ef484e3ec-kube-api-access-wncsh\") pod \"nova-operator-controller-manager-697bc559fc-n6m6j\" (UID: \"a0340c55-5a39-4841-a602-694ef484e3ec\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.636125 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjfr\" (UniqueName: \"kubernetes.io/projected/43ac4ab3-1f18-4b18-8a83-1561837988eb-kube-api-access-qpjfr\") pod \"mariadb-operator-controller-manager-79c8c4686c-ws54v\" (UID: \"43ac4ab3-1f18-4b18-8a83-1561837988eb\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.636164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnr2z\" (UniqueName: 
\"kubernetes.io/projected/8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73-kube-api-access-dnr2z\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-2dggq\" (UID: \"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.636979 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.646424 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.682059 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.687861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nplw2\" (UniqueName: \"kubernetes.io/projected/9fb465c9-338c-4755-ba24-b7985e57fa06-kube-api-access-nplw2\") pod \"manila-operator-controller-manager-5b5fd79c9c-47x2m\" (UID: \"9fb465c9-338c-4755-ba24-b7985e57fa06\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.694776 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wws\" (UniqueName: \"kubernetes.io/projected/b47de1f3-3223-47bb-a707-72ee23490049-kube-api-access-l2wws\") pod \"keystone-operator-controller-manager-7765d96ddf-6dsjv\" (UID: \"b47de1f3-3223-47bb-a707-72ee23490049\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.707186 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.711065 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.719105 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.722465 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d6bbs" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.771468 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.779395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxckw\" (UniqueName: \"kubernetes.io/projected/6a645239-185e-4bfb-b8a8-9c442ae1c379-kube-api-access-qxckw\") pod \"octavia-operator-controller-manager-998648c74-kgq7g\" (UID: \"6a645239-185e-4bfb-b8a8-9c442ae1c379\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.779610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncsh\" (UniqueName: \"kubernetes.io/projected/a0340c55-5a39-4841-a602-694ef484e3ec-kube-api-access-wncsh\") pod \"nova-operator-controller-manager-697bc559fc-n6m6j\" (UID: \"a0340c55-5a39-4841-a602-694ef484e3ec\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.791538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjfr\" (UniqueName: \"kubernetes.io/projected/43ac4ab3-1f18-4b18-8a83-1561837988eb-kube-api-access-qpjfr\") pod \"mariadb-operator-controller-manager-79c8c4686c-ws54v\" (UID: \"43ac4ab3-1f18-4b18-8a83-1561837988eb\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.791706 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.791711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnr2z\" (UniqueName: \"kubernetes.io/projected/8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73-kube-api-access-dnr2z\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-2dggq\" (UID: \"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.796422 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4c57n" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.817688 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.818333 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.842991 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.859742 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.887767 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjfr\" (UniqueName: \"kubernetes.io/projected/43ac4ab3-1f18-4b18-8a83-1561837988eb-kube-api-access-qpjfr\") pod \"mariadb-operator-controller-manager-79c8c4686c-ws54v\" (UID: \"43ac4ab3-1f18-4b18-8a83-1561837988eb\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.892017 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnr2z\" (UniqueName: \"kubernetes.io/projected/8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73-kube-api-access-dnr2z\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-2dggq\" (UID: \"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.895508 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p59p\" (UniqueName: \"kubernetes.io/projected/bb156fa4-57d0-457f-be10-e9c013f37a84-kube-api-access-8p59p\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.896586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxckw\" (UniqueName: \"kubernetes.io/projected/6a645239-185e-4bfb-b8a8-9c442ae1c379-kube-api-access-qxckw\") pod \"octavia-operator-controller-manager-998648c74-kgq7g\" (UID: \"6a645239-185e-4bfb-b8a8-9c442ae1c379\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.896754 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.896820 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7b7n\" (UniqueName: \"kubernetes.io/projected/c3102315-cf09-47e4-b1b2-4721b38ac5b8-kube-api-access-j7b7n\") pod \"ovn-operator-controller-manager-b6456fdb6-wdgv6\" (UID: \"c3102315-cf09-47e4-b1b2-4721b38ac5b8\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.902498 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.904334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncsh\" (UniqueName: \"kubernetes.io/projected/a0340c55-5a39-4841-a602-694ef484e3ec-kube-api-access-wncsh\") pod \"nova-operator-controller-manager-697bc559fc-n6m6j\" (UID: \"a0340c55-5a39-4841-a602-694ef484e3ec\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.975510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxckw\" (UniqueName: \"kubernetes.io/projected/6a645239-185e-4bfb-b8a8-9c442ae1c379-kube-api-access-qxckw\") pod \"octavia-operator-controller-manager-998648c74-kgq7g\" (UID: \"6a645239-185e-4bfb-b8a8-9c442ae1c379\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.975814 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.977288 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.980539 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6"] Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.982225 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.984972 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.985492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vw8c6" Dec 12 04:49:03 crc kubenswrapper[4796]: I1212 04:49:03.987907 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:03.998139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgxk\" (UniqueName: \"kubernetes.io/projected/7f217e33-5880-42b4-931f-8a4633195ffc-kube-api-access-tqgxk\") pod \"placement-operator-controller-manager-78f8948974-8k4ws\" (UID: \"7f217e33-5880-42b4-931f-8a4633195ffc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:03.998201 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p59p\" (UniqueName: \"kubernetes.io/projected/bb156fa4-57d0-457f-be10-e9c013f37a84-kube-api-access-8p59p\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:03.998269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:03.998298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7b7n\" (UniqueName: \"kubernetes.io/projected/c3102315-cf09-47e4-b1b2-4721b38ac5b8-kube-api-access-j7b7n\") pod \"ovn-operator-controller-manager-b6456fdb6-wdgv6\" (UID: \"c3102315-cf09-47e4-b1b2-4721b38ac5b8\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:03.998692 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:03.998727 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:04.498714481 +0000 UTC m=+935.374731628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:03.999077 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.000195 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.015611 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dh7v4" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.042104 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7b7n\" (UniqueName: \"kubernetes.io/projected/c3102315-cf09-47e4-b1b2-4721b38ac5b8-kube-api-access-j7b7n\") pod \"ovn-operator-controller-manager-b6456fdb6-wdgv6\" (UID: \"c3102315-cf09-47e4-b1b2-4721b38ac5b8\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.059591 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.086876 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p59p\" (UniqueName: \"kubernetes.io/projected/bb156fa4-57d0-457f-be10-e9c013f37a84-kube-api-access-8p59p\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.100370 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqgxk\" (UniqueName: \"kubernetes.io/projected/7f217e33-5880-42b4-931f-8a4633195ffc-kube-api-access-tqgxk\") pod \"placement-operator-controller-manager-78f8948974-8k4ws\" (UID: \"7f217e33-5880-42b4-931f-8a4633195ffc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.100417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4kg\" (UniqueName: \"kubernetes.io/projected/252d73ba-87e9-492d-a9c4-2f8e4e8d66fa-kube-api-access-sx4kg\") pod \"swift-operator-controller-manager-9d58d64bc-rpkcq\" (UID: \"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.100463 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.100623 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.100957 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:05.100939667 +0000 UTC m=+935.976956814 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.139402 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.146131 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.148104 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqgxk\" (UniqueName: \"kubernetes.io/projected/7f217e33-5880-42b4-931f-8a4633195ffc-kube-api-access-tqgxk\") pod \"placement-operator-controller-manager-78f8948974-8k4ws\" (UID: \"7f217e33-5880-42b4-931f-8a4633195ffc\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.178585 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.180377 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.189405 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9l7rq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.201251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4kg\" (UniqueName: \"kubernetes.io/projected/252d73ba-87e9-492d-a9c4-2f8e4e8d66fa-kube-api-access-sx4kg\") pod \"swift-operator-controller-manager-9d58d64bc-rpkcq\" (UID: \"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.201294 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77p8\" (UniqueName: \"kubernetes.io/projected/38f86aeb-2024-40b1-8c60-25c2c78ef7ac-kube-api-access-p77p8\") pod \"telemetry-operator-controller-manager-58d5ff84df-gvgzg\" (UID: \"38f86aeb-2024-40b1-8c60-25c2c78ef7ac\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.208058 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-jh5td"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.209542 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.221613 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lz28n" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.226470 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.242686 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.243800 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.249571 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-jh5td"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.254584 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.254789 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4kg\" (UniqueName: \"kubernetes.io/projected/252d73ba-87e9-492d-a9c4-2f8e4e8d66fa-kube-api-access-sx4kg\") pod \"swift-operator-controller-manager-9d58d64bc-rpkcq\" (UID: \"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.258760 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9s49m" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.310581 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fxq\" (UniqueName: \"kubernetes.io/projected/bbeb9b29-5dc1-4cdf-94de-397cdb4a32de-kube-api-access-c9fxq\") pod \"test-operator-controller-manager-5854674fcc-jh5td\" (UID: \"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.310816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77p8\" (UniqueName: \"kubernetes.io/projected/38f86aeb-2024-40b1-8c60-25c2c78ef7ac-kube-api-access-p77p8\") pod \"telemetry-operator-controller-manager-58d5ff84df-gvgzg\" (UID: \"38f86aeb-2024-40b1-8c60-25c2c78ef7ac\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.310875 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtjz\" (UniqueName: \"kubernetes.io/projected/cd307815-1f04-446d-a89b-60fa6574f0db-kube-api-access-ghtjz\") pod \"watcher-operator-controller-manager-75944c9b7-mt59j\" (UID: \"cd307815-1f04-446d-a89b-60fa6574f0db\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.313393 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.314293 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.317635 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-m6l4g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.317863 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.318074 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.330940 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.344406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77p8\" (UniqueName: \"kubernetes.io/projected/38f86aeb-2024-40b1-8c60-25c2c78ef7ac-kube-api-access-p77p8\") pod \"telemetry-operator-controller-manager-58d5ff84df-gvgzg\" (UID: \"38f86aeb-2024-40b1-8c60-25c2c78ef7ac\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.366187 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.371885 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.394198 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.413822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87pl\" (UniqueName: \"kubernetes.io/projected/f2d005ee-450b-4029-bb3c-a5b389edc347-kube-api-access-z87pl\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.413871 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.413896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghtjz\" (UniqueName: \"kubernetes.io/projected/cd307815-1f04-446d-a89b-60fa6574f0db-kube-api-access-ghtjz\") pod \"watcher-operator-controller-manager-75944c9b7-mt59j\" (UID: \"cd307815-1f04-446d-a89b-60fa6574f0db\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.413920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9fxq\" (UniqueName: \"kubernetes.io/projected/bbeb9b29-5dc1-4cdf-94de-397cdb4a32de-kube-api-access-c9fxq\") pod \"test-operator-controller-manager-5854674fcc-jh5td\" (UID: \"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.413996 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.464354 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.465204 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.476403 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.505394 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.515668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzg5\" (UniqueName: \"kubernetes.io/projected/0d5457f7-3a7d-4a0e-a733-33c78860c9b5-kube-api-access-ggzg5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jz9xq\" (UID: \"0d5457f7-3a7d-4a0e-a733-33c78860c9b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.515715 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.515974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87pl\" (UniqueName: \"kubernetes.io/projected/f2d005ee-450b-4029-bb3c-a5b389edc347-kube-api-access-z87pl\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.516023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.516341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.516545 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.516634 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:05.016618196 +0000 UTC m=+935.892635343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.516703 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.516727 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:05.516718819 +0000 UTC m=+936.392735966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.517334 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: E1212 04:49:04.517447 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:05.017412491 +0000 UTC m=+935.893429638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.596301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.611111 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nhbqh" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.617487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzg5\" (UniqueName: \"kubernetes.io/projected/0d5457f7-3a7d-4a0e-a733-33c78860c9b5-kube-api-access-ggzg5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jz9xq\" (UID: \"0d5457f7-3a7d-4a0e-a733-33c78860c9b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.626329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghtjz\" (UniqueName: \"kubernetes.io/projected/cd307815-1f04-446d-a89b-60fa6574f0db-kube-api-access-ghtjz\") pod \"watcher-operator-controller-manager-75944c9b7-mt59j\" (UID: \"cd307815-1f04-446d-a89b-60fa6574f0db\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.644352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9fxq\" 
(UniqueName: \"kubernetes.io/projected/bbeb9b29-5dc1-4cdf-94de-397cdb4a32de-kube-api-access-c9fxq\") pod \"test-operator-controller-manager-5854674fcc-jh5td\" (UID: \"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.658709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87pl\" (UniqueName: \"kubernetes.io/projected/f2d005ee-450b-4029-bb3c-a5b389edc347-kube-api-access-z87pl\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.774461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv"] Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.854100 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.858653 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggzg5\" (UniqueName: \"kubernetes.io/projected/0d5457f7-3a7d-4a0e-a733-33c78860c9b5-kube-api-access-ggzg5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jz9xq\" (UID: \"0d5457f7-3a7d-4a0e-a733-33c78860c9b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.897581 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.897625 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.901401 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" event={"ID":"f4b37e55-be7c-467b-9739-e82c28f1916e","Type":"ContainerStarted","Data":"e2f8d359407c7a0fff906159e240d547b7021780d4e03f0f3d1a9515edbfe5ac"} Dec 12 04:49:04 crc kubenswrapper[4796]: I1212 04:49:04.914913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" event={"ID":"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e","Type":"ContainerStarted","Data":"b79f50ee1b97b4e5070a48fdd6d3014dceef27a4422fe07c541a876675158b58"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.013359 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7"] Dec 12 04:49:05 crc kubenswrapper[4796]: W1212 04:49:05.047745 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod035421c3_b1dd_48de_a195_04bfef7c5a0e.slice/crio-952baeb18bc36c18f7caa2ff4b2dcca48cea2864c1343baa9bb3cbef7f0d687e WatchSource:0}: Error finding container 952baeb18bc36c18f7caa2ff4b2dcca48cea2864c1343baa9bb3cbef7f0d687e: Status 404 returned error can't find the container with id 952baeb18bc36c18f7caa2ff4b2dcca48cea2864c1343baa9bb3cbef7f0d687e Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.059483 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.059556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.059650 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.059718 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:06.059700775 +0000 UTC m=+936.935717922 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.059737 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.059790 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:06.059772248 +0000 UTC m=+936.935789445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.065115 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.073831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.160490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.160680 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.160751 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:07.160733034 +0000 UTC m=+938.036750181 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.205384 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.225845 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.473091 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.488087 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m"] Dec 12 04:49:05 crc kubenswrapper[4796]: W1212 04:49:05.493509 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb465c9_338c_4755_ba24_b7985e57fa06.slice/crio-92db68cba435d7f4a6f869dc01ad86fee980faabc26c5d5d4499b47d1d12b312 WatchSource:0}: Error finding container 92db68cba435d7f4a6f869dc01ad86fee980faabc26c5d5d4499b47d1d12b312: Status 404 returned error can't find the container with id 92db68cba435d7f4a6f869dc01ad86fee980faabc26c5d5d4499b47d1d12b312 Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.493965 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.583316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.583586 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: E1212 04:49:05.583631 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:07.58361778 +0000 UTC m=+938.459634927 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.679199 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.684557 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.712468 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg"] Dec 12 04:49:05 crc kubenswrapper[4796]: W1212 04:49:05.728159 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a645239_185e_4bfb_b8a8_9c442ae1c379.slice/crio-1edbebc81940eedd8896607175d748505ca4a1260a7631bdb07b399e259e1f16 WatchSource:0}: Error finding container 1edbebc81940eedd8896607175d748505ca4a1260a7631bdb07b399e259e1f16: Status 404 returned error can't find the container with id 1edbebc81940eedd8896607175d748505ca4a1260a7631bdb07b399e259e1f16 Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.769122 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq"] Dec 12 04:49:05 crc kubenswrapper[4796]: W1212 04:49:05.820319 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252d73ba_87e9_492d_a9c4_2f8e4e8d66fa.slice/crio-eb21f0bf97a03726fe0117346b0831c9f74d3fded93f80200ca29aa7636cf7fe WatchSource:0}: Error finding container eb21f0bf97a03726fe0117346b0831c9f74d3fded93f80200ca29aa7636cf7fe: Status 404 returned error can't find the container with id eb21f0bf97a03726fe0117346b0831c9f74d3fded93f80200ca29aa7636cf7fe Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.863723 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6"] Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.949761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" event={"ID":"565c4c89-1b44-462b-8307-15d3d0a6cf1f","Type":"ContainerStarted","Data":"9bdb95afba31b93fa8e9a91a37e4378725b4028578955afbe6ded86a63bfc1b2"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.962613 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" event={"ID":"9fb465c9-338c-4755-ba24-b7985e57fa06","Type":"ContainerStarted","Data":"92db68cba435d7f4a6f869dc01ad86fee980faabc26c5d5d4499b47d1d12b312"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.967523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" event={"ID":"b47de1f3-3223-47bb-a707-72ee23490049","Type":"ContainerStarted","Data":"9fa9f14c023bff18b75fb20b993c4fd0a3de2a5aef4a602839d12184678b534a"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.969088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" event={"ID":"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa","Type":"ContainerStarted","Data":"eb21f0bf97a03726fe0117346b0831c9f74d3fded93f80200ca29aa7636cf7fe"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.970208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" event={"ID":"43ac4ab3-1f18-4b18-8a83-1561837988eb","Type":"ContainerStarted","Data":"9952961bcd62636c63f88f78de63cf418b345bc12c1c3663f1de8c3d77c69941"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.971474 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" event={"ID":"c3102315-cf09-47e4-b1b2-4721b38ac5b8","Type":"ContainerStarted","Data":"3f540806c7dace300e34fb65c0f4e9f3e514980b2da148ebdb329fbed9f8a35e"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.972543 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" event={"ID":"3092bc98-4221-47ff-bae0-06efcfa85522","Type":"ContainerStarted","Data":"a6639cf4388397c8cc75a93156c91b0eaa955f58ec620713be15cc98bf0ab1c6"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.973439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" event={"ID":"6a645239-185e-4bfb-b8a8-9c442ae1c379","Type":"ContainerStarted","Data":"1edbebc81940eedd8896607175d748505ca4a1260a7631bdb07b399e259e1f16"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.974220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" event={"ID":"a0340c55-5a39-4841-a602-694ef484e3ec","Type":"ContainerStarted","Data":"20fca468cc231d0704293afc2111f10524dc73a3908c1d58e768f7157a2312f8"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.975336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" event={"ID":"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73","Type":"ContainerStarted","Data":"fa511b8df1ed87b45b55bc53c1e19596520105bed1a7c4cee703502fde580c21"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.976518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" event={"ID":"22df48e7-88f5-43df-bdce-9116599bea1b","Type":"ContainerStarted","Data":"001c018bff0d0ebd891d334fd92201a50970ea3c00102b90a23f2effa54628cf"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.983565 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" event={"ID":"035421c3-b1dd-48de-a195-04bfef7c5a0e","Type":"ContainerStarted","Data":"952baeb18bc36c18f7caa2ff4b2dcca48cea2864c1343baa9bb3cbef7f0d687e"} Dec 12 04:49:05 crc kubenswrapper[4796]: I1212 04:49:05.986608 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" event={"ID":"38f86aeb-2024-40b1-8c60-25c2c78ef7ac","Type":"ContainerStarted","Data":"1e63e196ac6773bf23a8d4b505394b2bad96b2a8e6511c4953ac754a62b614e2"} Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.072586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" 
event={"ID":"c14d829a-f63e-404c-b117-65c0e15280e8","Type":"ContainerStarted","Data":"bd6b7467772ca54d02377f8e9ddfe5c25cd646bbb49cf887efa3e28285da28ad"} Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.089063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.089350 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.089702 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.089804 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:08.089780894 +0000 UTC m=+938.965798071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.089711 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.090025 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:08.090006441 +0000 UTC m=+938.966023668 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.117405 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq"] Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.125621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws"] Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.135285 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j"] Dec 12 04:49:06 crc kubenswrapper[4796]: I1212 04:49:06.166748 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-jh5td"] Dec 12 04:49:06 crc kubenswrapper[4796]: W1212 04:49:06.203528 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f217e33_5880_42b4_931f_8a4633195ffc.slice/crio-1069edfc96b5092f9ba32f462a9eb9393978204ef79c0edcb4e4a22b926fa5f5 WatchSource:0}: Error finding container 1069edfc96b5092f9ba32f462a9eb9393978204ef79c0edcb4e4a22b926fa5f5: Status 404 returned error can't find the container with id 1069edfc96b5092f9ba32f462a9eb9393978204ef79c0edcb4e4a22b926fa5f5 Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.218997 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqgxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-8k4ws_openstack-operators(7f217e33-5880-42b4-931f-8a4633195ffc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.221583 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqgxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-8k4ws_openstack-operators(7f217e33-5880-42b4-931f-8a4633195ffc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.224964 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podUID="7f217e33-5880-42b4-931f-8a4633195ffc" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.232571 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9fxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-jh5td_openstack-operators(bbeb9b29-5dc1-4cdf-94de-397cdb4a32de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.246449 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9fxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-jh5td_openstack-operators(bbeb9b29-5dc1-4cdf-94de-397cdb4a32de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 12 04:49:06 crc kubenswrapper[4796]: E1212 04:49:06.250516 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podUID="bbeb9b29-5dc1-4cdf-94de-397cdb4a32de" Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.116345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" event={"ID":"0d5457f7-3a7d-4a0e-a733-33c78860c9b5","Type":"ContainerStarted","Data":"3ac270b505b4df2a1d953cf406c7e5c9629c49ae21af4c586b88268d28054910"} Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.149646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" event={"ID":"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de","Type":"ContainerStarted","Data":"898d0884eed84f1a6c8b08a5836250f9743518a17e5299f0dd8d3616ca998530"} Dec 12 04:49:07 crc kubenswrapper[4796]: E1212 04:49:07.159605 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podUID="bbeb9b29-5dc1-4cdf-94de-397cdb4a32de" Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.168140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" event={"ID":"7f217e33-5880-42b4-931f-8a4633195ffc","Type":"ContainerStarted","Data":"1069edfc96b5092f9ba32f462a9eb9393978204ef79c0edcb4e4a22b926fa5f5"} Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.172347 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" event={"ID":"cd307815-1f04-446d-a89b-60fa6574f0db","Type":"ContainerStarted","Data":"2dc85b25a982a2bcafe2b5ac8ff1c827a84741aeae8a53c4fab6ccac7c420fbc"} Dec 12 04:49:07 crc kubenswrapper[4796]: 
E1212 04:49:07.173229 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podUID="7f217e33-5880-42b4-931f-8a4633195ffc" Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.231589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:07 crc kubenswrapper[4796]: E1212 04:49:07.231857 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:07 crc kubenswrapper[4796]: E1212 04:49:07.232090 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:11.232054704 +0000 UTC m=+942.108071901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:07 crc kubenswrapper[4796]: I1212 04:49:07.586708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:07 crc kubenswrapper[4796]: E1212 04:49:07.586889 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:07 crc kubenswrapper[4796]: E1212 04:49:07.586936 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:11.586921603 +0000 UTC m=+942.462938740 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:08 crc kubenswrapper[4796]: I1212 04:49:08.102547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:08 crc kubenswrapper[4796]: I1212 04:49:08.102660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.102835 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.102892 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:12.102874267 +0000 UTC m=+942.978891414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.103267 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.103359 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:12.103346862 +0000 UTC m=+942.979364009 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.244140 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podUID="7f217e33-5880-42b4-931f-8a4633195ffc" Dec 12 04:49:08 crc kubenswrapper[4796]: E1212 04:49:08.244254 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podUID="bbeb9b29-5dc1-4cdf-94de-397cdb4a32de" Dec 12 04:49:11 crc kubenswrapper[4796]: I1212 04:49:11.269960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:11 crc kubenswrapper[4796]: E1212 04:49:11.270123 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:11 crc kubenswrapper[4796]: E1212 04:49:11.270472 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:19.270453484 +0000 UTC m=+950.146470631 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:11 crc kubenswrapper[4796]: I1212 04:49:11.683530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:11 crc kubenswrapper[4796]: E1212 04:49:11.683742 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:11 crc kubenswrapper[4796]: E1212 04:49:11.683884 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:19.68384148 +0000 UTC m=+950.559858627 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:12 crc kubenswrapper[4796]: I1212 04:49:12.188888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:12 crc kubenswrapper[4796]: I1212 04:49:12.189071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:12 crc kubenswrapper[4796]: E1212 04:49:12.189166 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:12 crc kubenswrapper[4796]: E1212 04:49:12.189218 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:12 crc kubenswrapper[4796]: E1212 04:49:12.189251 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:20.18922985 +0000 UTC m=+951.065247087 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:12 crc kubenswrapper[4796]: E1212 04:49:12.189293 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:20.18925974 +0000 UTC m=+951.065276957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:19 crc kubenswrapper[4796]: I1212 04:49:19.300406 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:19 crc kubenswrapper[4796]: E1212 04:49:19.300604 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:19 crc kubenswrapper[4796]: E1212 04:49:19.301037 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert podName:301fd006-5a61-46bd-b19f-bbd1ba8f7baf nodeName:}" failed. No retries permitted until 2025-12-12 04:49:35.301020374 +0000 UTC m=+966.177037521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert") pod "infra-operator-controller-manager-78d48bff9d-lzzhj" (UID: "301fd006-5a61-46bd-b19f-bbd1ba8f7baf") : secret "infra-operator-webhook-server-cert" not found Dec 12 04:49:19 crc kubenswrapper[4796]: I1212 04:49:19.705969 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:19 crc kubenswrapper[4796]: E1212 04:49:19.706165 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:19 crc kubenswrapper[4796]: E1212 04:49:19.706228 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert podName:bb156fa4-57d0-457f-be10-e9c013f37a84 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:35.706205331 +0000 UTC m=+966.582222498 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8bscf" (UID: "bb156fa4-57d0-457f-be10-e9c013f37a84") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 12 04:49:20 crc kubenswrapper[4796]: I1212 04:49:20.211505 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:20 crc kubenswrapper[4796]: I1212 04:49:20.211713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:20 crc kubenswrapper[4796]: E1212 04:49:20.211712 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 12 04:49:20 crc kubenswrapper[4796]: E1212 04:49:20.211756 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 12 04:49:20 crc kubenswrapper[4796]: E1212 04:49:20.211925 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:36.211900531 +0000 UTC m=+967.087917688 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "webhook-server-cert" not found Dec 12 04:49:20 crc kubenswrapper[4796]: E1212 04:49:20.212083 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs podName:f2d005ee-450b-4029-bb3c-a5b389edc347 nodeName:}" failed. No retries permitted until 2025-12-12 04:49:36.212033625 +0000 UTC m=+967.088050832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs") pod "openstack-operator-controller-manager-7775c45dbc-9fh7g" (UID: "f2d005ee-450b-4029-bb3c-a5b389edc347") : secret "metrics-server-cert" not found Dec 12 04:49:21 crc kubenswrapper[4796]: E1212 04:49:21.764413 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 12 04:49:21 crc kubenswrapper[4796]: E1212 04:49:21.764983 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7b7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-wdgv6_openstack-operators(c3102315-cf09-47e4-b1b2-4721b38ac5b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:22 crc kubenswrapper[4796]: E1212 04:49:22.348559 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 12 04:49:22 crc 
kubenswrapper[4796]: E1212 04:49:22.348730 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7bf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-5v7h7_openstack-operators(035421c3-b1dd-48de-a195-04bfef7c5a0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:25 crc kubenswrapper[4796]: E1212 04:49:25.745193 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 12 04:49:25 crc kubenswrapper[4796]: E1212 04:49:25.746075 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wncsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-n6m6j_openstack-operators(a0340c55-5a39-4841-a602-694ef484e3ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:26 crc kubenswrapper[4796]: E1212 04:49:26.265891 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 12 04:49:26 crc kubenswrapper[4796]: E1212 04:49:26.266337 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnr2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-2dggq_openstack-operators(8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:28 crc kubenswrapper[4796]: E1212 04:49:28.402644 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 12 04:49:28 crc kubenswrapper[4796]: E1212 04:49:28.402880 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x67z2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-qzrqh_openstack-operators(565c4c89-1b44-462b-8307-15d3d0a6cf1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:28 crc kubenswrapper[4796]: E1212 04:49:28.858041 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 12 04:49:28 crc kubenswrapper[4796]: E1212 04:49:28.858243 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtzrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-27f9h_openstack-operators(19b30665-06c6-48e5-8ec7-3eeaf3d3e72e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:30 crc kubenswrapper[4796]: E1212 04:49:30.743380 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 12 04:49:30 crc kubenswrapper[4796]: E1212 04:49:30.744151 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxckw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-kgq7g_openstack-operators(6a645239-185e-4bfb-b8a8-9c442ae1c379): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:32 crc kubenswrapper[4796]: E1212 04:49:32.938303 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 12 04:49:32 crc kubenswrapper[4796]: E1212 04:49:32.938708 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qpjfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-ws54v_openstack-operators(43ac4ab3-1f18-4b18-8a83-1561837988eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:33 crc kubenswrapper[4796]: E1212 04:49:33.356105 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 12 04:49:33 crc kubenswrapper[4796]: E1212 04:49:33.356572 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx4kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-rpkcq_openstack-operators(252d73ba-87e9-492d-a9c4-2f8e4e8d66fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:33 crc kubenswrapper[4796]: E1212 04:49:33.816674 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 12 04:49:33 crc kubenswrapper[4796]: E1212 04:49:33.816874 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nplw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-47x2m_openstack-operators(9fb465c9-338c-4755-ba24-b7985e57fa06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:34 crc kubenswrapper[4796]: E1212 04:49:34.300571 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f" Dec 12 04:49:34 crc kubenswrapper[4796]: E1212 04:49:34.300746 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p77p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-gvgzg_openstack-operators(38f86aeb-2024-40b1-8c60-25c2c78ef7ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.325805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.336066 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/301fd006-5a61-46bd-b19f-bbd1ba8f7baf-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lzzhj\" (UID: \"301fd006-5a61-46bd-b19f-bbd1ba8f7baf\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.444876 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t922h" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.453586 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.732081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.737841 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb156fa4-57d0-457f-be10-e9c013f37a84-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8bscf\" (UID: \"bb156fa4-57d0-457f-be10-e9c013f37a84\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.900473 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d6bbs" Dec 12 04:49:35 crc kubenswrapper[4796]: I1212 04:49:35.908900 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.238782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.238878 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.244965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-webhook-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.251356 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2d005ee-450b-4029-bb3c-a5b389edc347-metrics-certs\") pod \"openstack-operator-controller-manager-7775c45dbc-9fh7g\" (UID: \"f2d005ee-450b-4029-bb3c-a5b389edc347\") " pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.454417 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-m6l4g" Dec 12 04:49:36 crc kubenswrapper[4796]: I1212 04:49:36.462978 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:37 crc kubenswrapper[4796]: E1212 04:49:37.327320 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 12 04:49:37 crc kubenswrapper[4796]: E1212 04:49:37.327554 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vjxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-mdmcv_openstack-operators(22df48e7-88f5-43df-bdce-9116599bea1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:37 crc kubenswrapper[4796]: E1212 04:49:37.814754 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 12 04:49:37 crc kubenswrapper[4796]: E1212 04:49:37.815150 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggk47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-jqd2f_openstack-operators(3092bc98-4221-47ff-bae0-06efcfa85522): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:39 crc kubenswrapper[4796]: E1212 04:49:39.793603 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 12 04:49:39 crc kubenswrapper[4796]: E1212 04:49:39.795208 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 
500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jhsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-w5fjz_openstack-operators(c14d829a-f63e-404c-b117-65c0e15280e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:40 crc kubenswrapper[4796]: E1212 04:49:40.294749 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 12 04:49:40 crc kubenswrapper[4796]: E1212 04:49:40.294900 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9fxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-jh5td_openstack-operators(bbeb9b29-5dc1-4cdf-94de-397cdb4a32de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:40 crc kubenswrapper[4796]: E1212 04:49:40.893005 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 12 04:49:40 crc kubenswrapper[4796]: E1212 04:49:40.893646 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqgxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-8k4ws_openstack-operators(7f217e33-5880-42b4-931f-8a4633195ffc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.228421 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.229268 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggzg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jz9xq_openstack-operators(0d5457f7-3a7d-4a0e-a733-33c78860c9b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.230540 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" podUID="0d5457f7-3a7d-4a0e-a733-33c78860c9b5" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.544964 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" podUID="0d5457f7-3a7d-4a0e-a733-33c78860c9b5" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.807032 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 12 04:49:43 crc kubenswrapper[4796]: E1212 04:49:43.807255 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2wws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-6dsjv_openstack-operators(b47de1f3-3223-47bb-a707-72ee23490049): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:49:44 crc kubenswrapper[4796]: I1212 04:49:44.429641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf"] Dec 12 04:49:44 crc kubenswrapper[4796]: I1212 04:49:44.485999 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj"] Dec 12 04:49:44 crc kubenswrapper[4796]: W1212 04:49:44.509501 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb156fa4_57d0_457f_be10_e9c013f37a84.slice/crio-2941417d833aa00273c21b226a359a72f305bc633b83386ac9d3c9e3316c8a2a WatchSource:0}: Error finding container 2941417d833aa00273c21b226a359a72f305bc633b83386ac9d3c9e3316c8a2a: Status 404 returned error can't find the container with id 2941417d833aa00273c21b226a359a72f305bc633b83386ac9d3c9e3316c8a2a Dec 12 04:49:44 crc kubenswrapper[4796]: I1212 04:49:44.516587 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g"] Dec 12 04:49:44 crc kubenswrapper[4796]: I1212 04:49:44.547604 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" event={"ID":"bb156fa4-57d0-457f-be10-e9c013f37a84","Type":"ContainerStarted","Data":"2941417d833aa00273c21b226a359a72f305bc633b83386ac9d3c9e3316c8a2a"} Dec 12 04:49:44 crc kubenswrapper[4796]: I1212 04:49:44.548865 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" event={"ID":"301fd006-5a61-46bd-b19f-bbd1ba8f7baf","Type":"ContainerStarted","Data":"48ebc44f3eebf9e2d74621d1d83a9929199327a05680f9553b157a3a79fafc49"} Dec 12 04:49:44 crc kubenswrapper[4796]: W1212 04:49:44.578766 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2d005ee_450b_4029_bb3c_a5b389edc347.slice/crio-d48a73021b0b33a42783e57fc9d32e3fd5384430b5fe2385eb7e95fe414f325e WatchSource:0}: Error finding container d48a73021b0b33a42783e57fc9d32e3fd5384430b5fe2385eb7e95fe414f325e: Status 404 returned error can't find the container with id d48a73021b0b33a42783e57fc9d32e3fd5384430b5fe2385eb7e95fe414f325e Dec 12 04:49:45 crc kubenswrapper[4796]: I1212 04:49:45.557501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
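
The &Container{...} dumps in the preceding error entries all carry the same health-check settings for the operator "manager" containers: an HTTP liveness probe on /healthz at port 8081 (15 s initial delay, 20 s period) and a readiness probe on /readyz at port 8081 (5 s initial delay, 10 s period), each with a 1 s timeout, successThreshold 1 and failureThreshold 3. As a reading aid, the sketch below re-renders those logged values as a container manifest fragment; the values are copied from the dumps above, and emitting the fragment from Python is purely illustrative, not how these operators are actually deployed.

import json

# Probe settings copied from the Container{...} dumps logged above; rendering
# them as a JSON manifest fragment is only an illustration of the same values.
container_fragment = {
    "name": "manager",
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 8081, "scheme": "HTTP"},
        "initialDelaySeconds": 15,
        "periodSeconds": 20,
        "timeoutSeconds": 1,
        "successThreshold": 1,
        "failureThreshold": 3,
    },
    "readinessProbe": {
        "httpGet": {"path": "/readyz", "port": 8081, "scheme": "HTTP"},
        "initialDelaySeconds": 5,
        "periodSeconds": 10,
        "timeoutSeconds": 1,
        "successThreshold": 1,
        "failureThreshold": 3,
    },
}

print(json.dumps(container_fragment, indent=2))
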
pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" event={"ID":"cd307815-1f04-446d-a89b-60fa6574f0db","Type":"ContainerStarted","Data":"1f72b64c20d8fd3f5ca5b296591bbc122749abf2ad161086b3de90eba85ac691"} Dec 12 04:49:45 crc kubenswrapper[4796]: I1212 04:49:45.560522 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" event={"ID":"f4b37e55-be7c-467b-9739-e82c28f1916e","Type":"ContainerStarted","Data":"368c055975034520cd282a9149207f69acc69b863ee781015778c4bfbc3ed6dd"} Dec 12 04:49:45 crc kubenswrapper[4796]: I1212 04:49:45.561822 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" event={"ID":"f2d005ee-450b-4029-bb3c-a5b389edc347","Type":"ContainerStarted","Data":"d48a73021b0b33a42783e57fc9d32e3fd5384430b5fe2385eb7e95fe414f325e"} Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.095458 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" podUID="035421c3-b1dd-48de-a195-04bfef7c5a0e" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.160062 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" podUID="19b30665-06c6-48e5-8ec7-3eeaf3d3e72e" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.338815 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" podUID="a0340c55-5a39-4841-a602-694ef484e3ec" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.354868 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" podUID="565c4c89-1b44-462b-8307-15d3d0a6cf1f" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.388338 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" podUID="252d73ba-87e9-492d-a9c4-2f8e4e8d66fa" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.511342 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" podUID="9fb465c9-338c-4755-ba24-b7985e57fa06" Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.568211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" 
event={"ID":"a0340c55-5a39-4841-a602-694ef484e3ec","Type":"ContainerStarted","Data":"5540a52ec75e3a7b888b7d6402ee1dc50cd719dd749b849bf41f74a25101312f"} Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.574683 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" event={"ID":"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa","Type":"ContainerStarted","Data":"6b82d63c1601ae74416207ef81bb67b3fc5a99ee73edee64f484b92a9ac31046"} Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.583466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" event={"ID":"565c4c89-1b44-462b-8307-15d3d0a6cf1f","Type":"ContainerStarted","Data":"a0bbecc02bffb122b6f628da24e78306fa84c292276dd322e85c7911ac95f47e"} Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.598389 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" event={"ID":"9fb465c9-338c-4755-ba24-b7985e57fa06","Type":"ContainerStarted","Data":"bd8d1ef0fa6610a3ec27829270f41ee0d12cdaf35716b69af10f93e271609eed"} Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.608043 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" event={"ID":"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e","Type":"ContainerStarted","Data":"71341d5fed6925bd6808ad150358b04e7d9a0a7dcea423585bf7fa1a39eb5e27"} Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.624415 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" podUID="6a645239-185e-4bfb-b8a8-9c442ae1c379" Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.624624 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podUID="7f217e33-5880-42b4-931f-8a4633195ffc" Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.627094 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" event={"ID":"f2d005ee-450b-4029-bb3c-a5b389edc347","Type":"ContainerStarted","Data":"56ad2bad94d46a6dcbd9c947a40f6bf703241e2bfd897854651c3b3048bdef9a"} Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.627876 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.637121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" event={"ID":"035421c3-b1dd-48de-a195-04bfef7c5a0e","Type":"ContainerStarted","Data":"80cd638c8a3fbe3ef46e5e59a969d80084294cc9a6062fd18d5901fb600d5ee9"} Dec 12 04:49:46 crc kubenswrapper[4796]: E1212 04:49:46.730059 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" podUID="c3102315-cf09-47e4-b1b2-4721b38ac5b8" Dec 12 04:49:46 crc kubenswrapper[4796]: I1212 04:49:46.756512 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" podStartSLOduration=43.756491694 podStartE2EDuration="43.756491694s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:49:46.711255286 +0000 UTC m=+977.587272433" watchObservedRunningTime="2025-12-12 04:49:46.756491694 +0000 UTC m=+977.632508841" Dec 12 04:49:47 crc kubenswrapper[4796]: E1212 04:49:47.042804 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" podUID="38f86aeb-2024-40b1-8c60-25c2c78ef7ac" Dec 12 04:49:47 crc kubenswrapper[4796]: I1212 04:49:47.645117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" event={"ID":"6a645239-185e-4bfb-b8a8-9c442ae1c379","Type":"ContainerStarted","Data":"1fc45fb3aa1fd089f340782c37c0d9ad70382b2f0638d702fd70cc29f2754c3e"} Dec 12 04:49:47 crc kubenswrapper[4796]: I1212 04:49:47.647627 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" event={"ID":"7f217e33-5880-42b4-931f-8a4633195ffc","Type":"ContainerStarted","Data":"d5153c4707282824f0228cfb87725ec4faab409e52cbcf26dbf6de2f3f2dc9db"} Dec 12 04:49:47 crc kubenswrapper[4796]: I1212 04:49:47.648920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" event={"ID":"38f86aeb-2024-40b1-8c60-25c2c78ef7ac","Type":"ContainerStarted","Data":"9328e52d7164db53cd361f5183c5af2d6d1b9cd470ea4fd94c06697cec76f19d"} Dec 12 04:49:47 crc kubenswrapper[4796]: E1212 04:49:47.649368 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podUID="7f217e33-5880-42b4-931f-8a4633195ffc" Dec 12 04:49:47 crc kubenswrapper[4796]: I1212 04:49:47.651505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" event={"ID":"c3102315-cf09-47e4-b1b2-4721b38ac5b8","Type":"ContainerStarted","Data":"9323a7b4911503c023a87ba30f6aa824114adabd598b8addea2ee7cf7bae7ed2"} Dec 12 04:49:47 crc kubenswrapper[4796]: E1212 04:49:47.894618 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" podUID="c14d829a-f63e-404c-b117-65c0e15280e8" Dec 12 04:49:47 crc kubenswrapper[4796]: E1212 04:49:47.906112 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" podUID="b47de1f3-3223-47bb-a707-72ee23490049" Dec 12 04:49:48 crc kubenswrapper[4796]: E1212 04:49:48.188686 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" podUID="8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73" Dec 12 04:49:48 crc kubenswrapper[4796]: E1212 04:49:48.248643 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" podUID="22df48e7-88f5-43df-bdce-9116599bea1b" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.659311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" event={"ID":"b47de1f3-3223-47bb-a707-72ee23490049","Type":"ContainerStarted","Data":"2cba311cf7b0dd65286906dd202331fa828e3f7c4123758bec0ad4fb6d3fb7f0"} Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.661344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" event={"ID":"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73","Type":"ContainerStarted","Data":"1813c939091760f7ca3d0cc6c9db58155191566c684eb8fd20ebb01feae206f9"} Dec 12 04:49:48 crc kubenswrapper[4796]: E1212 04:49:48.661816 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" podUID="b47de1f3-3223-47bb-a707-72ee23490049" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.665206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" event={"ID":"252d73ba-87e9-492d-a9c4-2f8e4e8d66fa","Type":"ContainerStarted","Data":"650269cb3aa28e84d46142abe34c8665eeaa5a9a27d660fa781a007bca47e1cb"} Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.665754 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.668156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" event={"ID":"22df48e7-88f5-43df-bdce-9116599bea1b","Type":"ContainerStarted","Data":"a45f7e2db499cd2133dfce2441aaece0af64b0159a7f4f6382c8228f48420bb5"} Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.671501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" event={"ID":"a0340c55-5a39-4841-a602-694ef484e3ec","Type":"ContainerStarted","Data":"7abc5194f99d09791f745b11cf83720453784009f13a2c9e6f18e5fdf0813dec"} Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.672090 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.689199 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" event={"ID":"c14d829a-f63e-404c-b117-65c0e15280e8","Type":"ContainerStarted","Data":"64bd57ba44be1233444c4faf6ac7983ff9b1f96c4fffcfc73d9802ab23a0f37c"} Dec 12 04:49:48 crc kubenswrapper[4796]: E1212 04:49:48.702728 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" podUID="c14d829a-f63e-404c-b117-65c0e15280e8" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.741621 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" podStartSLOduration=3.625035886 podStartE2EDuration="45.74160218s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.832592368 +0000 UTC m=+936.708609515" lastFinishedPulling="2025-12-12 04:49:47.949158642 +0000 UTC m=+978.825175809" observedRunningTime="2025-12-12 04:49:48.724050146 +0000 UTC m=+979.600067293" watchObservedRunningTime="2025-12-12 04:49:48.74160218 +0000 UTC m=+979.617619327" Dec 12 04:49:48 crc kubenswrapper[4796]: I1212 04:49:48.759030 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" podStartSLOduration=4.118164048 podStartE2EDuration="45.75901562s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.509385107 +0000 UTC m=+936.385402254" lastFinishedPulling="2025-12-12 04:49:47.150236679 +0000 UTC m=+978.026253826" observedRunningTime="2025-12-12 04:49:48.754464256 +0000 UTC m=+979.630481413" watchObservedRunningTime="2025-12-12 04:49:48.75901562 +0000 UTC m=+979.635032767" Dec 12 04:49:49 crc kubenswrapper[4796]: E1212 04:49:49.706304 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" podUID="b47de1f3-3223-47bb-a707-72ee23490049" Dec 12 04:49:49 crc kubenswrapper[4796]: E1212 04:49:49.706770 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" podUID="c14d829a-f63e-404c-b117-65c0e15280e8" Dec 12 04:49:50 crc kubenswrapper[4796]: E1212 04:49:50.695511 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podUID="bbeb9b29-5dc1-4cdf-94de-397cdb4a32de" Dec 12 04:49:50 crc 
kubenswrapper[4796]: E1212 04:49:50.707820 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" podUID="43ac4ab3-1f18-4b18-8a83-1561837988eb" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.716553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" event={"ID":"565c4c89-1b44-462b-8307-15d3d0a6cf1f","Type":"ContainerStarted","Data":"a8c8ca3ff796e81ea19f159e3b68721de0f96a63c95282d39308e3e319a47fba"} Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.716831 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.731741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" event={"ID":"6a645239-185e-4bfb-b8a8-9c442ae1c379","Type":"ContainerStarted","Data":"f59f464ba279be7ea3950341ba8cb00ed98b80368c1cec2305e8b9039b972fee"} Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.731869 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.742324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" event={"ID":"035421c3-b1dd-48de-a195-04bfef7c5a0e","Type":"ContainerStarted","Data":"b40ce4ecf6f7a8e34027d5319a818c779deae15a3a035ef904e1acec4eb496db"} Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.742949 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.754007 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" podStartSLOduration=3.251366773 podStartE2EDuration="47.753992788s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.140729763 +0000 UTC m=+936.016746910" lastFinishedPulling="2025-12-12 04:49:49.643355778 +0000 UTC m=+980.519372925" observedRunningTime="2025-12-12 04:49:50.749397973 +0000 UTC m=+981.625415120" watchObservedRunningTime="2025-12-12 04:49:50.753992788 +0000 UTC m=+981.630009935" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.757392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" event={"ID":"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de","Type":"ContainerStarted","Data":"70b0025a24a9c2b957da3a4a7a8e9fe2aa1a9b853c08e5ee0ed616df45d03123"} Dec 12 04:49:50 crc kubenswrapper[4796]: E1212 04:49:50.759679 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podUID="bbeb9b29-5dc1-4cdf-94de-397cdb4a32de" Dec 12 04:49:50 crc 
kubenswrapper[4796]: I1212 04:49:50.765271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" event={"ID":"c3102315-cf09-47e4-b1b2-4721b38ac5b8","Type":"ContainerStarted","Data":"1f0719219a4a18aaac775c573e56a2406bb577e1d3da7ca4247907726e3df9e7"} Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.765374 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.777449 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" podStartSLOduration=3.191846924 podStartE2EDuration="47.777431058s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.055768091 +0000 UTC m=+935.931785238" lastFinishedPulling="2025-12-12 04:49:49.641352215 +0000 UTC m=+980.517369372" observedRunningTime="2025-12-12 04:49:50.776598431 +0000 UTC m=+981.652615588" watchObservedRunningTime="2025-12-12 04:49:50.777431058 +0000 UTC m=+981.653448205" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.809891 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" podStartSLOduration=3.415001898 podStartE2EDuration="47.809874201s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.825169884 +0000 UTC m=+936.701187041" lastFinishedPulling="2025-12-12 04:49:50.220042197 +0000 UTC m=+981.096059344" observedRunningTime="2025-12-12 04:49:50.806495664 +0000 UTC m=+981.682512811" watchObservedRunningTime="2025-12-12 04:49:50.809874201 +0000 UTC m=+981.685891348" Dec 12 04:49:50 crc kubenswrapper[4796]: I1212 04:49:50.865466 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" podStartSLOduration=3.535498279 podStartE2EDuration="47.865451885s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.890917548 +0000 UTC m=+936.766934695" lastFinishedPulling="2025-12-12 04:49:50.220871154 +0000 UTC m=+981.096888301" observedRunningTime="2025-12-12 04:49:50.86084476 +0000 UTC m=+981.736861907" watchObservedRunningTime="2025-12-12 04:49:50.865451885 +0000 UTC m=+981.741469032" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.793249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" event={"ID":"bb156fa4-57d0-457f-be10-e9c013f37a84","Type":"ContainerStarted","Data":"781c40f360144b50d00c41892832f15cd3388ac494a5af28fdc10310f07a6e16"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.802198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" event={"ID":"301fd006-5a61-46bd-b19f-bbd1ba8f7baf","Type":"ContainerStarted","Data":"a2db6351b420d93a8361493b56d3faf736c7ffbb6fb5aacea91cf5181470aa27"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.826826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" event={"ID":"43ac4ab3-1f18-4b18-8a83-1561837988eb","Type":"ContainerStarted","Data":"367c94f38ccf28f63ede6324b2a5bd02027eb7882194f5a72270364147c5d3a4"} Dec 
12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.845641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" event={"ID":"9fb465c9-338c-4755-ba24-b7985e57fa06","Type":"ContainerStarted","Data":"a7b4996ae1cc902851d9b4a5fcba7a330fc2139d52ea5e43fccaa626c9346c92"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.846539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.850880 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" event={"ID":"f4b37e55-be7c-467b-9739-e82c28f1916e","Type":"ContainerStarted","Data":"3f3b9a48c98d5dddc188498ea9589e877c0ba645c2139f4736563c4d13a48a8c"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.851353 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.857150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" event={"ID":"19b30665-06c6-48e5-8ec7-3eeaf3d3e72e","Type":"ContainerStarted","Data":"061bcdd3468f7dabb5a6cc5d3b6467822b18bc90cb4f0bf9df1e62394534807a"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.857746 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.862486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" event={"ID":"cd307815-1f04-446d-a89b-60fa6574f0db","Type":"ContainerStarted","Data":"385a993501cfb0756dd77574d9adf7b8756e723f4e91e88b642f82a8034157ed"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.863439 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.867603 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.871038 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.871876 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" podStartSLOduration=4.728489698 podStartE2EDuration="48.871867386s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.497940696 +0000 UTC m=+936.373957843" lastFinishedPulling="2025-12-12 04:49:49.641318384 +0000 UTC m=+980.517335531" observedRunningTime="2025-12-12 04:49:51.86756236 +0000 UTC m=+982.743579517" watchObservedRunningTime="2025-12-12 04:49:51.871867386 +0000 UTC m=+982.747884533" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.892586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" 
event={"ID":"8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73","Type":"ContainerStarted","Data":"f888a8c602e7c5aafb1cee247e103ce877da1a7391a5bd9eb07e1e795e33cebb"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.892796 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.894369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" event={"ID":"3092bc98-4221-47ff-bae0-06efcfa85522","Type":"ContainerStarted","Data":"e05dbe8fec28f55f46cd1557b4735e3b33aebb16a2540e5071d9a95be184485d"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.896660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" event={"ID":"38f86aeb-2024-40b1-8c60-25c2c78ef7ac","Type":"ContainerStarted","Data":"24e1519bd46f85b498e315d34351ee4dd003ba55bff44052aa3e74d21cf343fe"} Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.897253 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.901146 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" podStartSLOduration=3.783783143 podStartE2EDuration="48.901135669s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:04.52497916 +0000 UTC m=+935.400996307" lastFinishedPulling="2025-12-12 04:49:49.642331686 +0000 UTC m=+980.518348833" observedRunningTime="2025-12-12 04:49:51.897586058 +0000 UTC m=+982.773603205" watchObservedRunningTime="2025-12-12 04:49:51.901135669 +0000 UTC m=+982.777152816" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.907780 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" event={"ID":"22df48e7-88f5-43df-bdce-9116599bea1b","Type":"ContainerStarted","Data":"e83d5b7bfc825aa0111ab43333dc8663830b05f9a4f3d1526582cc6ca385dc3a"} Dec 12 04:49:51 crc kubenswrapper[4796]: E1212 04:49:51.909945 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" podUID="3092bc98-4221-47ff-bae0-06efcfa85522" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.921517 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-mt59j" podStartSLOduration=4.874760535 podStartE2EDuration="48.921498302s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:06.176087708 +0000 UTC m=+937.052104855" lastFinishedPulling="2025-12-12 04:49:50.222825475 +0000 UTC m=+981.098842622" observedRunningTime="2025-12-12 04:49:51.916905748 +0000 UTC m=+982.792922895" watchObservedRunningTime="2025-12-12 04:49:51.921498302 +0000 UTC m=+982.797515449" Dec 12 04:49:51 crc kubenswrapper[4796]: I1212 04:49:51.963886 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-f9h97" 
podStartSLOduration=3.031993138 podStartE2EDuration="48.963868279s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:04.289745536 +0000 UTC m=+935.165762683" lastFinishedPulling="2025-12-12 04:49:50.221620677 +0000 UTC m=+981.097637824" observedRunningTime="2025-12-12 04:49:51.958654265 +0000 UTC m=+982.834671402" watchObservedRunningTime="2025-12-12 04:49:51.963868279 +0000 UTC m=+982.839885426" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.037026 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" podStartSLOduration=4.682602571 podStartE2EDuration="49.037009458s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.86785679 +0000 UTC m=+936.743873937" lastFinishedPulling="2025-12-12 04:49:50.222263677 +0000 UTC m=+981.098280824" observedRunningTime="2025-12-12 04:49:52.036336446 +0000 UTC m=+982.912353593" watchObservedRunningTime="2025-12-12 04:49:52.037009458 +0000 UTC m=+982.913026605" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.084294 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" podStartSLOduration=3.77756896 podStartE2EDuration="49.084251739s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:04.914987439 +0000 UTC m=+935.791004586" lastFinishedPulling="2025-12-12 04:49:50.221670218 +0000 UTC m=+981.097687365" observedRunningTime="2025-12-12 04:49:52.081735299 +0000 UTC m=+982.957752456" watchObservedRunningTime="2025-12-12 04:49:52.084251739 +0000 UTC m=+982.960268886" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.915582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" event={"ID":"bb156fa4-57d0-457f-be10-e9c013f37a84","Type":"ContainerStarted","Data":"56598b9c821f2d3c6cb9b1bb09daa9705bef70bbd4cdd69552b176f1db5fd176"} Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.916244 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.917695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" event={"ID":"301fd006-5a61-46bd-b19f-bbd1ba8f7baf","Type":"ContainerStarted","Data":"0c35f22ee5c7ffced1c1424172fa2f1a2ce5b46881b447bd5f3cfec26dce09da"} Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.917767 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.919604 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" event={"ID":"43ac4ab3-1f18-4b18-8a83-1561837988eb","Type":"ContainerStarted","Data":"0ed781ab1ae505b23be8422b05271700779ab2fb4d4a80c642a6f785f890252e"} Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.921322 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.922465 4796 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.954755 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" podStartSLOduration=6.144877668 podStartE2EDuration="49.954730489s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.832461424 +0000 UTC m=+936.708478571" lastFinishedPulling="2025-12-12 04:49:49.642314245 +0000 UTC m=+980.518331392" observedRunningTime="2025-12-12 04:49:52.113902984 +0000 UTC m=+982.989920141" watchObservedRunningTime="2025-12-12 04:49:52.954730489 +0000 UTC m=+983.830747636" Dec 12 04:49:52 crc kubenswrapper[4796]: I1212 04:49:52.959784 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" podStartSLOduration=44.134390979 podStartE2EDuration="49.959773758s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:44.512606331 +0000 UTC m=+975.388623468" lastFinishedPulling="2025-12-12 04:49:50.3379891 +0000 UTC m=+981.214006247" observedRunningTime="2025-12-12 04:49:52.952928163 +0000 UTC m=+983.828945310" watchObservedRunningTime="2025-12-12 04:49:52.959773758 +0000 UTC m=+983.835790905" Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.013550 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" podStartSLOduration=44.304527029 podStartE2EDuration="50.013535695s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:44.512603421 +0000 UTC m=+975.388620568" lastFinishedPulling="2025-12-12 04:49:50.221612087 +0000 UTC m=+981.097629234" observedRunningTime="2025-12-12 04:49:53.009488837 +0000 UTC m=+983.885505974" watchObservedRunningTime="2025-12-12 04:49:53.013535695 +0000 UTC m=+983.889552832" Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.033916 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" podStartSLOduration=3.287768192 podStartE2EDuration="50.033900598s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.21763477 +0000 UTC m=+936.093651917" lastFinishedPulling="2025-12-12 04:49:51.963767176 +0000 UTC m=+982.839784323" observedRunningTime="2025-12-12 04:49:53.030205761 +0000 UTC m=+983.906222908" watchObservedRunningTime="2025-12-12 04:49:53.033900598 +0000 UTC m=+983.909917745" Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.907967 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.928027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" event={"ID":"3092bc98-4221-47ff-bae0-06efcfa85522","Type":"ContainerStarted","Data":"875b43428840152c54c9d35f7a2b569c0a5c9934d152f1d8f71f396adce3dd0b"} Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.946260 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" podStartSLOduration=2.694803258 podStartE2EDuration="50.94623937s" 
podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.140273828 +0000 UTC m=+936.016290975" lastFinishedPulling="2025-12-12 04:49:53.39170994 +0000 UTC m=+984.267727087" observedRunningTime="2025-12-12 04:49:53.94310433 +0000 UTC m=+984.819121477" watchObservedRunningTime="2025-12-12 04:49:53.94623937 +0000 UTC m=+984.822256527" Dec 12 04:49:53 crc kubenswrapper[4796]: I1212 04:49:53.987757 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-n6m6j" Dec 12 04:49:54 crc kubenswrapper[4796]: I1212 04:49:54.374558 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-rpkcq" Dec 12 04:49:55 crc kubenswrapper[4796]: I1212 04:49:55.458882 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lzzhj" Dec 12 04:49:55 crc kubenswrapper[4796]: I1212 04:49:55.914706 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8bscf" Dec 12 04:49:56 crc kubenswrapper[4796]: I1212 04:49:56.472563 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7775c45dbc-9fh7g" Dec 12 04:49:59 crc kubenswrapper[4796]: I1212 04:49:59.974860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" event={"ID":"0d5457f7-3a7d-4a0e-a733-33c78860c9b5","Type":"ContainerStarted","Data":"a6adb2787f36e283abf647dd4255b3cf6631c1cdf303df4864dd34b7018de471"} Dec 12 04:49:59 crc kubenswrapper[4796]: I1212 04:49:59.996495 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jz9xq" podStartSLOduration=3.168554228 podStartE2EDuration="55.996452773s" podCreationTimestamp="2025-12-12 04:49:04 +0000 UTC" firstStartedPulling="2025-12-12 04:49:06.153932269 +0000 UTC m=+937.029949416" lastFinishedPulling="2025-12-12 04:49:58.981830804 +0000 UTC m=+989.857847961" observedRunningTime="2025-12-12 04:49:59.994799821 +0000 UTC m=+990.870816988" watchObservedRunningTime="2025-12-12 04:49:59.996452773 +0000 UTC m=+990.872469930" Dec 12 04:50:02 crc kubenswrapper[4796]: I1212 04:50:02.969034 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:50:02 crc kubenswrapper[4796]: I1212 04:50:02.969515 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.002112 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" 
event={"ID":"7f217e33-5880-42b4-931f-8a4633195ffc","Type":"ContainerStarted","Data":"24bdb2330b347c63253d8b235d18cadc25bbeaab1dc05572d8359708ff25517d"} Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.002713 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.021339 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" podStartSLOduration=4.150462068 podStartE2EDuration="1m0.021317713s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:06.218822217 +0000 UTC m=+937.094839364" lastFinishedPulling="2025-12-12 04:50:02.089677862 +0000 UTC m=+992.965695009" observedRunningTime="2025-12-12 04:50:03.019674471 +0000 UTC m=+993.895691658" watchObservedRunningTime="2025-12-12 04:50:03.021317713 +0000 UTC m=+993.897334870" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.428646 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-27f9h" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.454600 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-qzrqh" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.527920 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-mdmcv" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.609481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.614368 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jqd2f" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.651212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-5v7h7" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.847869 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-47x2m" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.905642 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-ws54v" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.987525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-kgq7g" Dec 12 04:50:03 crc kubenswrapper[4796]: I1212 04:50:03.990454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-2dggq" Dec 12 04:50:04 crc kubenswrapper[4796]: I1212 04:50:04.149110 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wdgv6" Dec 12 04:50:04 crc kubenswrapper[4796]: I1212 04:50:04.514679 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-gvgzg" Dec 12 04:50:05 crc kubenswrapper[4796]: I1212 04:50:05.023537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" event={"ID":"bbeb9b29-5dc1-4cdf-94de-397cdb4a32de","Type":"ContainerStarted","Data":"b3cc44f62f5373e1a0c1685fe722903511391b95c19bb23c6297b5432e1d1aba"} Dec 12 04:50:05 crc kubenswrapper[4796]: I1212 04:50:05.023763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:50:05 crc kubenswrapper[4796]: I1212 04:50:05.053512 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" podStartSLOduration=4.033759583 podStartE2EDuration="1m2.053486534s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:06.232440326 +0000 UTC m=+937.108457473" lastFinishedPulling="2025-12-12 04:50:04.252167267 +0000 UTC m=+995.128184424" observedRunningTime="2025-12-12 04:50:05.046799554 +0000 UTC m=+995.922816721" watchObservedRunningTime="2025-12-12 04:50:05.053486534 +0000 UTC m=+995.929503721" Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.031019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" event={"ID":"c14d829a-f63e-404c-b117-65c0e15280e8","Type":"ContainerStarted","Data":"081ca8d06d27b011fd28b362aeac1cbec97861eb9bea4bbe853f701482d576a2"} Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.031475 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.033507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" event={"ID":"b47de1f3-3223-47bb-a707-72ee23490049","Type":"ContainerStarted","Data":"8fc136b803e96c96022d135ae79390388f5bb33db2469db99cced9519abffbad"} Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.033792 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.053274 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" podStartSLOduration=3.212481335 podStartE2EDuration="1m3.053258305s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.232451447 +0000 UTC m=+936.108468594" lastFinishedPulling="2025-12-12 04:50:05.073228417 +0000 UTC m=+995.949245564" observedRunningTime="2025-12-12 04:50:06.048819775 +0000 UTC m=+996.924836922" watchObservedRunningTime="2025-12-12 04:50:06.053258305 +0000 UTC m=+996.929275442" Dec 12 04:50:06 crc kubenswrapper[4796]: I1212 04:50:06.067745 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" podStartSLOduration=3.497948054 podStartE2EDuration="1m3.067729142s" podCreationTimestamp="2025-12-12 04:49:03 +0000 UTC" firstStartedPulling="2025-12-12 04:49:05.500960161 +0000 UTC m=+936.376977308" lastFinishedPulling="2025-12-12 04:50:05.070741249 +0000 UTC 
m=+995.946758396" observedRunningTime="2025-12-12 04:50:06.061378341 +0000 UTC m=+996.937395478" watchObservedRunningTime="2025-12-12 04:50:06.067729142 +0000 UTC m=+996.943746289" Dec 12 04:50:13 crc kubenswrapper[4796]: I1212 04:50:13.822312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-w5fjz" Dec 12 04:50:13 crc kubenswrapper[4796]: I1212 04:50:13.871483 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-6dsjv" Dec 12 04:50:14 crc kubenswrapper[4796]: I1212 04:50:14.334850 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8k4ws" Dec 12 04:50:14 crc kubenswrapper[4796]: I1212 04:50:14.857585 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-jh5td" Dec 12 04:50:32 crc kubenswrapper[4796]: I1212 04:50:32.969768 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:50:32 crc kubenswrapper[4796]: I1212 04:50:32.970223 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.820560 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.821873 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.824134 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.825534 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.825652 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.825833 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mh29w" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.831901 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.927163 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.928265 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.932227 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.938048 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.952765 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:33 crc kubenswrapper[4796]: I1212 04:50:33.952855 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56vdf\" (UniqueName: \"kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.054208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.054305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4x6g\" (UniqueName: \"kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.054345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.054393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56vdf\" (UniqueName: \"kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.054418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.055709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 
04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.078131 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56vdf\" (UniqueName: \"kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf\") pod \"dnsmasq-dns-675f4bcbfc-tttzk\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.138617 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.155576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4x6g\" (UniqueName: \"kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.156089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.156201 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.157365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.157366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.183990 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4x6g\" (UniqueName: \"kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g\") pod \"dnsmasq-dns-78dd6ddcc-mbn7k\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.240121 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.629581 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:50:34 crc kubenswrapper[4796]: I1212 04:50:34.704072 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:50:34 crc kubenswrapper[4796]: W1212 04:50:34.706978 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892e98ac_bba0_47f9_b2ed_462a8f9a7f60.slice/crio-337dcfa94d1fe8283011d7d12214385970d918077dc79737ec77bbeeb6d73db8 WatchSource:0}: Error finding container 337dcfa94d1fe8283011d7d12214385970d918077dc79737ec77bbeeb6d73db8: Status 404 returned error can't find the container with id 337dcfa94d1fe8283011d7d12214385970d918077dc79737ec77bbeeb6d73db8 Dec 12 04:50:35 crc kubenswrapper[4796]: I1212 04:50:35.260180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" event={"ID":"892e98ac-bba0-47f9-b2ed-462a8f9a7f60","Type":"ContainerStarted","Data":"337dcfa94d1fe8283011d7d12214385970d918077dc79737ec77bbeeb6d73db8"} Dec 12 04:50:35 crc kubenswrapper[4796]: I1212 04:50:35.261264 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" event={"ID":"a02ae80d-c473-4918-8ea9-9cbb267c3a01","Type":"ContainerStarted","Data":"645f6398048db92845596dc6f2c5ea9cf822e063bf47f13b6b1965311a6cb475"} Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.596942 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.641411 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.642521 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.654822 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.690911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.690979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.691006 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmfj\" (UniqueName: \"kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.792046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmfj\" (UniqueName: \"kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.792138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.792203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.793200 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.793901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.816468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmfj\" (UniqueName: 
\"kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj\") pod \"dnsmasq-dns-666b6646f7-wbdbm\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.979624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:50:36 crc kubenswrapper[4796]: I1212 04:50:36.982842 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.021612 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.023150 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.070392 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.200474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.200539 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.200897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hkth\" (UniqueName: \"kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.301959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hkth\" (UniqueName: \"kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.302005 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.302031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.304485 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.306197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.332021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hkth\" (UniqueName: \"kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth\") pod \"dnsmasq-dns-57d769cc4f-2972q\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.354134 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.699463 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.796116 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.799725 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805298 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xsbpn" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805501 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805685 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805810 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805855 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.805975 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.806461 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.819445 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.905407 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918445 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " 
pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918716 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918744 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918886 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.918947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.919014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.919044 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:37 crc kubenswrapper[4796]: I1212 04:50:37.919087 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrk4\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.020398 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.020443 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.020511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrk4\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.020530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021803 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021857 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021875 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.021992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.022016 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.022388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.023195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.024546 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.025038 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.034827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.034994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.035949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.041654 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrk4\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.042472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.054432 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.137815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.177113 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.178357 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.183665 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.183853 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.183960 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.184064 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.184160 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9dzjt" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.184348 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.184458 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.197544 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227526 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227692 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227706 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6jf\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.227757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv6jf\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.333853 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.335044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.338042 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") device 
mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.341628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.342150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.343956 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.344474 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.345371 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.356533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.368041 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.375630 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" event={"ID":"32249fd3-e15b-4019-9518-aba16a1c74f3","Type":"ContainerStarted","Data":"adf28bc813bea565cdf477e81f1b6d5d1833795b91c38e14fa75204949da5e09"} Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.379009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.391710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" 
event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerStarted","Data":"d23eabf9d6ec77d1dfbcec02167a505c92267bf950e93b854448f9757ea49e10"} Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.424647 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv6jf\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.427792 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.514312 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.759934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:50:38 crc kubenswrapper[4796]: W1212 04:50:38.797561 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f474c7f_e87c_4c21_8ebb_f0266779bceb.slice/crio-70d3255715ab3f7a36aa4bf09994fb998e08d3dc921aaa8a2ada5b6f017a830d WatchSource:0}: Error finding container 70d3255715ab3f7a36aa4bf09994fb998e08d3dc921aaa8a2ada5b6f017a830d: Status 404 returned error can't find the container with id 70d3255715ab3f7a36aa4bf09994fb998e08d3dc921aaa8a2ada5b6f017a830d Dec 12 04:50:38 crc kubenswrapper[4796]: I1212 04:50:38.818642 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:50:38 crc kubenswrapper[4796]: W1212 04:50:38.836568 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ec4e97_93b3_46f0_9b09_76c22a3ed215.slice/crio-58b494214f2fc0f0a7e3dbe96cb70c41346e47015294fea2251124227a97563e WatchSource:0}: Error finding container 58b494214f2fc0f0a7e3dbe96cb70c41346e47015294fea2251124227a97563e: Status 404 returned error can't find the container with id 58b494214f2fc0f0a7e3dbe96cb70c41346e47015294fea2251124227a97563e Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.399097 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.400191 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.402757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.412197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ngbs9" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.412361 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.412620 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.414170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.477651 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerStarted","Data":"58b494214f2fc0f0a7e3dbe96cb70c41346e47015294fea2251124227a97563e"} Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.477986 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.478087 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerStarted","Data":"70d3255715ab3f7a36aa4bf09994fb998e08d3dc921aaa8a2ada5b6f017a830d"} Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552327 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz5n\" (UniqueName: \"kubernetes.io/projected/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kube-api-access-7qz5n\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.552694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655461 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz5n\" (UniqueName: \"kubernetes.io/projected/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kube-api-access-7qz5n\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.655480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.657169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.663906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.665437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kolla-config\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.666816 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4b59263e-1bd8-4661-b612-2f4bc4f611f1-config-data-default\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.667736 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.676868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.682005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b59263e-1bd8-4661-b612-2f4bc4f611f1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.689258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz5n\" (UniqueName: \"kubernetes.io/projected/4b59263e-1bd8-4661-b612-2f4bc4f611f1-kube-api-access-7qz5n\") pod 
\"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.728733 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"4b59263e-1bd8-4661-b612-2f4bc4f611f1\") " pod="openstack/openstack-galera-0" Dec 12 04:50:39 crc kubenswrapper[4796]: I1212 04:50:39.750166 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.404025 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.885185 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.886704 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.889507 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.889680 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qsfrd" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.890890 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.892017 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 12 04:50:40 crc kubenswrapper[4796]: I1212 04:50:40.938790 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006767 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006895 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " 
pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.006983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.007008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.007029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frt9l\" (UniqueName: \"kubernetes.io/projected/3338cb28-50b7-41c6-af36-ec2fb86fb949-kube-api-access-frt9l\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107827 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107873 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frt9l\" (UniqueName: \"kubernetes.io/projected/3338cb28-50b7-41c6-af36-ec2fb86fb949-kube-api-access-frt9l\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107933 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.107995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.108020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.108044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.108289 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.108556 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.109180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.110243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.110428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3338cb28-50b7-41c6-af36-ec2fb86fb949-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.155439 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 
04:50:41.155451 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338cb28-50b7-41c6-af36-ec2fb86fb949-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.183715 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frt9l\" (UniqueName: \"kubernetes.io/projected/3338cb28-50b7-41c6-af36-ec2fb86fb949-kube-api-access-frt9l\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.184497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3338cb28-50b7-41c6-af36-ec2fb86fb949\") " pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.229757 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.315260 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.316492 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.321262 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-76l6m" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.321470 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.321644 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.326498 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.415026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxph2\" (UniqueName: \"kubernetes.io/projected/497a4966-f578-46a2-a33c-c3288f96f7f1-kube-api-access-cxph2\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.415132 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-config-data\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.415154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.415168 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.415190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-kolla-config\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.516116 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxph2\" (UniqueName: \"kubernetes.io/projected/497a4966-f578-46a2-a33c-c3288f96f7f1-kube-api-access-cxph2\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.516192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-config-data\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.516213 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.516227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.516251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-kolla-config\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.517736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-kolla-config\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.519894 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/497a4966-f578-46a2-a33c-c3288f96f7f1-config-data\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.531765 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4b59263e-1bd8-4661-b612-2f4bc4f611f1","Type":"ContainerStarted","Data":"808bc5ccf6a6b4a7e749220f4128f79f1a95c6b34a813c0bd185d60dac49540d"} Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.533924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.535853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/497a4966-f578-46a2-a33c-c3288f96f7f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.566581 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxph2\" (UniqueName: \"kubernetes.io/projected/497a4966-f578-46a2-a33c-c3288f96f7f1-kube-api-access-cxph2\") pod \"memcached-0\" (UID: \"497a4966-f578-46a2-a33c-c3288f96f7f1\") " pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.731687 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 12 04:50:41 crc kubenswrapper[4796]: I1212 04:50:41.910844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 12 04:50:42 crc kubenswrapper[4796]: I1212 04:50:42.514735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 12 04:50:42 crc kubenswrapper[4796]: I1212 04:50:42.592949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"497a4966-f578-46a2-a33c-c3288f96f7f1","Type":"ContainerStarted","Data":"1ca152b47a6c28230fee968338243075fb74f517c012c1f98d5498f902691c7f"} Dec 12 04:50:42 crc kubenswrapper[4796]: I1212 04:50:42.601903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3338cb28-50b7-41c6-af36-ec2fb86fb949","Type":"ContainerStarted","Data":"5670db5d30a068ed95c4c076fd84ac809596c2eedc42e56a6118d28085b4a938"} Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.099305 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.105470 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.116240 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-298qt" Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.161205 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.260938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l69x2\" (UniqueName: \"kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2\") pod \"kube-state-metrics-0\" (UID: \"3857d6ad-7515-4600-8e29-a5e3182f5253\") " pod="openstack/kube-state-metrics-0" Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.362488 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l69x2\" (UniqueName: \"kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2\") pod \"kube-state-metrics-0\" (UID: \"3857d6ad-7515-4600-8e29-a5e3182f5253\") " pod="openstack/kube-state-metrics-0" Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.423522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l69x2\" (UniqueName: \"kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2\") pod \"kube-state-metrics-0\" (UID: \"3857d6ad-7515-4600-8e29-a5e3182f5253\") " pod="openstack/kube-state-metrics-0" Dec 12 04:50:43 crc kubenswrapper[4796]: I1212 04:50:43.470834 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:50:44 crc kubenswrapper[4796]: I1212 04:50:44.187634 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:50:44 crc kubenswrapper[4796]: W1212 04:50:44.250537 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3857d6ad_7515_4600_8e29_a5e3182f5253.slice/crio-2d84d331452e06d1b7fc471ddb852799c3a40cae241a21d742e9d05665994aea WatchSource:0}: Error finding container 2d84d331452e06d1b7fc471ddb852799c3a40cae241a21d742e9d05665994aea: Status 404 returned error can't find the container with id 2d84d331452e06d1b7fc471ddb852799c3a40cae241a21d742e9d05665994aea Dec 12 04:50:44 crc kubenswrapper[4796]: I1212 04:50:44.679362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3857d6ad-7515-4600-8e29-a5e3182f5253","Type":"ContainerStarted","Data":"2d84d331452e06d1b7fc471ddb852799c3a40cae241a21d742e9d05665994aea"} Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.606740 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ml9sj"] Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.607927 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.611346 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.611562 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x2ph4" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.612599 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.622132 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ml9sj"] Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.683821 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9xcn6"] Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.687774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.699139 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9xcn6"] Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.740580 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0751eb6e-3452-4b8d-abfa-d37121e1a03e-scripts\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.740666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.740690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-ovn-controller-tls-certs\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.740719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.740749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-combined-ca-bundle\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.741000 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbrw\" (UniqueName: \"kubernetes.io/projected/0751eb6e-3452-4b8d-abfa-d37121e1a03e-kube-api-access-cqbrw\") pod \"ovn-controller-ml9sj\" (UID: 
\"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.741068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-log-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842314 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-etc-ovs\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-log\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnrf\" (UniqueName: \"kubernetes.io/projected/1f98d057-864c-464b-91e7-85c6462f8afb-kube-api-access-4tnrf\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-lib\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842450 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842470 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-ovn-controller-tls-certs\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-combined-ca-bundle\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc 
kubenswrapper[4796]: I1212 04:50:46.842536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqbrw\" (UniqueName: \"kubernetes.io/projected/0751eb6e-3452-4b8d-abfa-d37121e1a03e-kube-api-access-cqbrw\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842566 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-run\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-log-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f98d057-864c-464b-91e7-85c6462f8afb-scripts\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.842633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0751eb6e-3452-4b8d-abfa-d37121e1a03e-scripts\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.844367 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.846865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0751eb6e-3452-4b8d-abfa-d37121e1a03e-scripts\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.847529 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-run\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.847995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0751eb6e-3452-4b8d-abfa-d37121e1a03e-var-log-ovn\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.853978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-combined-ca-bundle\") pod 
\"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.867617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqbrw\" (UniqueName: \"kubernetes.io/projected/0751eb6e-3452-4b8d-abfa-d37121e1a03e-kube-api-access-cqbrw\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.869852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0751eb6e-3452-4b8d-abfa-d37121e1a03e-ovn-controller-tls-certs\") pod \"ovn-controller-ml9sj\" (UID: \"0751eb6e-3452-4b8d-abfa-d37121e1a03e\") " pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.942592 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ml9sj" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950247 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-lib\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-run\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f98d057-864c-464b-91e7-85c6462f8afb-scripts\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-etc-ovs\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-log\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.950547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tnrf\" (UniqueName: \"kubernetes.io/projected/1f98d057-864c-464b-91e7-85c6462f8afb-kube-api-access-4tnrf\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.952021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-etc-ovs\") pod \"ovn-controller-ovs-9xcn6\" (UID: 
\"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.952172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-lib\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.952222 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-run\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.952310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1f98d057-864c-464b-91e7-85c6462f8afb-var-log\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:46 crc kubenswrapper[4796]: I1212 04:50:46.953508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f98d057-864c-464b-91e7-85c6462f8afb-scripts\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.003736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnrf\" (UniqueName: \"kubernetes.io/projected/1f98d057-864c-464b-91e7-85c6462f8afb-kube-api-access-4tnrf\") pod \"ovn-controller-ovs-9xcn6\" (UID: \"1f98d057-864c-464b-91e7-85c6462f8afb\") " pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.029190 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.119147 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.125888 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.128520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hdw2c" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.128689 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.128906 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.129441 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.130119 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.131437 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.278861 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.278919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.278953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.278978 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.279008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.279037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.279066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/7de6f8df-6271-4b09-94c5-642c37337fcf-kube-api-access-dwnrn\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.279091 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-config\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.380613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.380927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/7de6f8df-6271-4b09-94c5-642c37337fcf-kube-api-access-dwnrn\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.380959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-config\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.380993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.381015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.381043 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.381068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.382807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 
04:50:47.381096 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.393459 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.393950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6f8df-6271-4b09-94c5-642c37337fcf-config\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.394237 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.401030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.417300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.418079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de6f8df-6271-4b09-94c5-642c37337fcf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.442315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwnrn\" (UniqueName: \"kubernetes.io/projected/7de6f8df-6271-4b09-94c5-642c37337fcf-kube-api-access-dwnrn\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.461339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7de6f8df-6271-4b09-94c5-642c37337fcf\") " pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:47 crc kubenswrapper[4796]: I1212 04:50:47.510469 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.516062 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.517854 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.525139 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.525737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ksn9n" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.525881 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.526075 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.537528 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2vl\" (UniqueName: \"kubernetes.io/projected/5a25b030-6ebc-4ac2-8114-f24663c7a815-kube-api-access-qg2vl\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-config\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc 
kubenswrapper[4796]: I1212 04:50:50.650715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.650748 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.751966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-config\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752022 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752116 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752211 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg2vl\" (UniqueName: \"kubernetes.io/projected/5a25b030-6ebc-4ac2-8114-f24663c7a815-kube-api-access-qg2vl\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752546 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.752929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-config\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.753518 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.754484 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a25b030-6ebc-4ac2-8114-f24663c7a815-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.760492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.760891 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.761999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a25b030-6ebc-4ac2-8114-f24663c7a815-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.772840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg2vl\" (UniqueName: \"kubernetes.io/projected/5a25b030-6ebc-4ac2-8114-f24663c7a815-kube-api-access-qg2vl\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.777171 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5a25b030-6ebc-4ac2-8114-f24663c7a815\") " pod="openstack/ovsdbserver-sb-0" Dec 12 04:50:50 crc kubenswrapper[4796]: I1212 04:50:50.850065 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 12 04:51:02 crc kubenswrapper[4796]: E1212 04:51:02.378168 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 12 04:51:02 crc kubenswrapper[4796]: E1212 04:51:02.379000 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv6jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e0ec4e97-93b3-46f0-9b09-76c22a3ed215): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:02 crc kubenswrapper[4796]: E1212 04:51:02.380322 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" Dec 12 04:51:02 crc kubenswrapper[4796]: E1212 04:51:02.872962 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" Dec 12 04:51:02 crc kubenswrapper[4796]: I1212 04:51:02.969464 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:51:02 crc kubenswrapper[4796]: I1212 04:51:02.969521 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:51:02 crc kubenswrapper[4796]: I1212 04:51:02.969570 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:51:02 crc kubenswrapper[4796]: I1212 04:51:02.970317 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:51:02 crc kubenswrapper[4796]: I1212 04:51:02.970376 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c" gracePeriod=600 Dec 12 04:51:03 crc kubenswrapper[4796]: I1212 04:51:03.884762 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c" exitCode=0 Dec 12 04:51:03 crc kubenswrapper[4796]: I1212 04:51:03.884805 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c"} Dec 12 04:51:03 crc kubenswrapper[4796]: I1212 04:51:03.884839 4796 scope.go:117] "RemoveContainer" containerID="276b61fb2fa37553e2279ac84eab51942aa3dddc3e5b7b40311531ace1182b7d" Dec 12 04:51:09 crc kubenswrapper[4796]: E1212 04:51:09.642807 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 12 04:51:09 crc kubenswrapper[4796]: E1212 04:51:09.643571 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c 
cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(8f474c7f-e87c-4c21-8ebb-f0266779bceb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:09 crc kubenswrapper[4796]: E1212 04:51:09.644855 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" Dec 12 04:51:09 crc kubenswrapper[4796]: E1212 04:51:09.935112 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.479451 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.481147 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qz5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(4b59263e-1bd8-4661-b612-2f4bc4f611f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.482557 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="4b59263e-1bd8-4661-b612-2f4bc4f611f1" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.487945 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.488143 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frt9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(3338cb28-50b7-41c6-af36-ec2fb86fb949): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.490601 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="3338cb28-50b7-41c6-af36-ec2fb86fb949" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.947665 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="3338cb28-50b7-41c6-af36-ec2fb86fb949" Dec 12 04:51:11 crc kubenswrapper[4796]: E1212 04:51:11.948096 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="4b59263e-1bd8-4661-b612-2f4bc4f611f1" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.153045 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 12 04:51:12 crc kubenswrapper[4796]: 
E1212 04:51:12.153203 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n8dh669hbbh646h56fh74h64ch5bh5b9h5bfh578h55ch7dh85h648h66fh59chfdh5d6h96h6fh76h546hbch57dh5fh65fhf7h647h5f7h588h59q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxph2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(497a4966-f578-46a2-a33c-c3288f96f7f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.154670 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="497a4966-f578-46a2-a33c-c3288f96f7f1" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.956591 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.956745 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56vdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tttzk_openstack(a02ae80d-c473-4918-8ea9-9cbb267c3a01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.957970 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" podUID="a02ae80d-c473-4918-8ea9-9cbb267c3a01" Dec 12 04:51:12 crc kubenswrapper[4796]: E1212 04:51:12.961981 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="497a4966-f578-46a2-a33c-c3288f96f7f1" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.056707 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.057067 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4x6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mbn7k_openstack(892e98ac-bba0-47f9-b2ed-462a8f9a7f60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.063979 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" podUID="892e98ac-bba0-47f9-b2ed-462a8f9a7f60" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.078388 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.078499 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hkth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2972q_openstack(778cd05f-f671-4ec9-b48f-97f9af5f848a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.080788 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.092216 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.092399 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xmfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-wbdbm_openstack(32249fd3-e15b-4019-9518-aba16a1c74f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.094431 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" Dec 12 04:51:13 crc kubenswrapper[4796]: I1212 04:51:13.733612 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ml9sj"] Dec 12 04:51:13 crc kubenswrapper[4796]: I1212 04:51:13.908087 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9xcn6"] Dec 12 04:51:13 crc kubenswrapper[4796]: I1212 04:51:13.967434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5"} Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.969330 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" Dec 12 04:51:13 crc kubenswrapper[4796]: E1212 04:51:13.969542 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.473056 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 12 04:51:14 crc kubenswrapper[4796]: W1212 04:51:14.529744 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a25b030_6ebc_4ac2_8114_f24663c7a815.slice/crio-5616a37f39f1f66a8c18c0ed477cd30a0e804c258cb573bb276dea987f398469 WatchSource:0}: Error finding container 5616a37f39f1f66a8c18c0ed477cd30a0e804c258cb573bb276dea987f398469: Status 404 returned error can't find the container with id 5616a37f39f1f66a8c18c0ed477cd30a0e804c258cb573bb276dea987f398469 Dec 12 04:51:14 crc kubenswrapper[4796]: E1212 04:51:14.532491 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 12 04:51:14 crc kubenswrapper[4796]: E1212 04:51:14.532533 4796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 12 04:51:14 crc kubenswrapper[4796]: E1212 04:51:14.532656 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l69x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod kube-state-metrics-0_openstack(3857d6ad-7515-4600-8e29-a5e3182f5253): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:51:14 crc kubenswrapper[4796]: E1212 04:51:14.534084 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.638859 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.648973 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.802220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4x6g\" (UniqueName: \"kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g\") pod \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.802407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config\") pod \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.802468 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc\") pod \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\" (UID: \"892e98ac-bba0-47f9-b2ed-462a8f9a7f60\") " Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.802500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config\") pod \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.802525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56vdf\" (UniqueName: \"kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf\") pod \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\" (UID: \"a02ae80d-c473-4918-8ea9-9cbb267c3a01\") " Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "892e98ac-bba0-47f9-b2ed-462a8f9a7f60" (UID: "892e98ac-bba0-47f9-b2ed-462a8f9a7f60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config" (OuterVolumeSpecName: "config") pod "a02ae80d-c473-4918-8ea9-9cbb267c3a01" (UID: "a02ae80d-c473-4918-8ea9-9cbb267c3a01"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803272 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config" (OuterVolumeSpecName: "config") pod "892e98ac-bba0-47f9-b2ed-462a8f9a7f60" (UID: "892e98ac-bba0-47f9-b2ed-462a8f9a7f60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803834 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803860 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.803873 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02ae80d-c473-4918-8ea9-9cbb267c3a01-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.807878 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g" (OuterVolumeSpecName: "kube-api-access-b4x6g") pod "892e98ac-bba0-47f9-b2ed-462a8f9a7f60" (UID: "892e98ac-bba0-47f9-b2ed-462a8f9a7f60"). InnerVolumeSpecName "kube-api-access-b4x6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.809584 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf" (OuterVolumeSpecName: "kube-api-access-56vdf") pod "a02ae80d-c473-4918-8ea9-9cbb267c3a01" (UID: "a02ae80d-c473-4918-8ea9-9cbb267c3a01"). InnerVolumeSpecName "kube-api-access-56vdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.905535 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56vdf\" (UniqueName: \"kubernetes.io/projected/a02ae80d-c473-4918-8ea9-9cbb267c3a01-kube-api-access-56vdf\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.905562 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4x6g\" (UniqueName: \"kubernetes.io/projected/892e98ac-bba0-47f9-b2ed-462a8f9a7f60-kube-api-access-b4x6g\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.973324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" event={"ID":"892e98ac-bba0-47f9-b2ed-462a8f9a7f60","Type":"ContainerDied","Data":"337dcfa94d1fe8283011d7d12214385970d918077dc79737ec77bbeeb6d73db8"} Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.973400 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mbn7k" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.975176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5a25b030-6ebc-4ac2-8114-f24663c7a815","Type":"ContainerStarted","Data":"5616a37f39f1f66a8c18c0ed477cd30a0e804c258cb573bb276dea987f398469"} Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.975857 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" event={"ID":"a02ae80d-c473-4918-8ea9-9cbb267c3a01","Type":"ContainerDied","Data":"645f6398048db92845596dc6f2c5ea9cf822e063bf47f13b6b1965311a6cb475"} Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.975914 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tttzk" Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.977144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ml9sj" event={"ID":"0751eb6e-3452-4b8d-abfa-d37121e1a03e","Type":"ContainerStarted","Data":"f19d501b55440b60e2261427399af8e0696e6693645b6228f0dd200d73e28140"} Dec 12 04:51:14 crc kubenswrapper[4796]: I1212 04:51:14.978872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xcn6" event={"ID":"1f98d057-864c-464b-91e7-85c6462f8afb","Type":"ContainerStarted","Data":"9a2bcd9396c1080de402976ed0e147875eeb0376359b5174ea0531f734d94fbe"} Dec 12 04:51:14 crc kubenswrapper[4796]: E1212 04:51:14.985440 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.118358 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.126502 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tttzk"] Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.173479 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.183869 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mbn7k"] Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.420364 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892e98ac-bba0-47f9-b2ed-462a8f9a7f60" path="/var/lib/kubelet/pods/892e98ac-bba0-47f9-b2ed-462a8f9a7f60/volumes" Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.421046 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02ae80d-c473-4918-8ea9-9cbb267c3a01" path="/var/lib/kubelet/pods/a02ae80d-c473-4918-8ea9-9cbb267c3a01/volumes" Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.547153 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 12 04:51:15 crc kubenswrapper[4796]: W1212 04:51:15.555569 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7de6f8df_6271_4b09_94c5_642c37337fcf.slice/crio-7917b9eed1b24eb71c63bd5a42aaff77ed08870f9711e3f88ccd0ce4dcdba233 WatchSource:0}: Error finding container 
7917b9eed1b24eb71c63bd5a42aaff77ed08870f9711e3f88ccd0ce4dcdba233: Status 404 returned error can't find the container with id 7917b9eed1b24eb71c63bd5a42aaff77ed08870f9711e3f88ccd0ce4dcdba233 Dec 12 04:51:15 crc kubenswrapper[4796]: I1212 04:51:15.987866 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7de6f8df-6271-4b09-94c5-642c37337fcf","Type":"ContainerStarted","Data":"7917b9eed1b24eb71c63bd5a42aaff77ed08870f9711e3f88ccd0ce4dcdba233"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.025885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5a25b030-6ebc-4ac2-8114-f24663c7a815","Type":"ContainerStarted","Data":"3d4c2ebb7ef3c0dc8fd0045cdaaa97f3d7a6df17c6510a602340b9208846cd8c"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.027304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7de6f8df-6271-4b09-94c5-642c37337fcf","Type":"ContainerStarted","Data":"16f860074f2ff8d0a0aca182ecfef748e98769ac6f94f0c635ede4eae427c331"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.030973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ml9sj" event={"ID":"0751eb6e-3452-4b8d-abfa-d37121e1a03e","Type":"ContainerStarted","Data":"0d36f02eafab2931fb86e6077601252b606ab425feabf3be49b5a8cc5bba7208"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.031099 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ml9sj" Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.033342 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f98d057-864c-464b-91e7-85c6462f8afb" containerID="3aeebd51c7b6eac4bb889199998a74f05f5aa030b4ad9def8ea799d75fab8fce" exitCode=0 Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.033403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xcn6" event={"ID":"1f98d057-864c-464b-91e7-85c6462f8afb","Type":"ContainerDied","Data":"3aeebd51c7b6eac4bb889199998a74f05f5aa030b4ad9def8ea799d75fab8fce"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.036437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerStarted","Data":"0638c4b39def9c37b4ed634dc7f7190e375875f7342113d40c4cbff5aad06f38"} Dec 12 04:51:20 crc kubenswrapper[4796]: I1212 04:51:20.051264 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ml9sj" podStartSLOduration=29.965889711 podStartE2EDuration="34.051251146s" podCreationTimestamp="2025-12-12 04:50:46 +0000 UTC" firstStartedPulling="2025-12-12 04:51:14.578858013 +0000 UTC m=+1065.454875160" lastFinishedPulling="2025-12-12 04:51:18.664219458 +0000 UTC m=+1069.540236595" observedRunningTime="2025-12-12 04:51:20.049167691 +0000 UTC m=+1070.925184838" watchObservedRunningTime="2025-12-12 04:51:20.051251146 +0000 UTC m=+1070.927268293" Dec 12 04:51:21 crc kubenswrapper[4796]: I1212 04:51:21.046675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xcn6" event={"ID":"1f98d057-864c-464b-91e7-85c6462f8afb","Type":"ContainerStarted","Data":"9a6fe6baf1a94b7e17e281eee3589fcbd1cd076f368ffbd1bd5702c1a5ae9f4f"} Dec 12 04:51:21 crc kubenswrapper[4796]: I1212 04:51:21.046969 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xcn6" 
event={"ID":"1f98d057-864c-464b-91e7-85c6462f8afb","Type":"ContainerStarted","Data":"4e1231e701a668b5de9280fd60df4d11b86113f1c89f4ac1a941ef27ca609c78"} Dec 12 04:51:21 crc kubenswrapper[4796]: I1212 04:51:21.081569 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9xcn6" podStartSLOduration=30.982410472 podStartE2EDuration="35.081547198s" podCreationTimestamp="2025-12-12 04:50:46 +0000 UTC" firstStartedPulling="2025-12-12 04:51:14.528558947 +0000 UTC m=+1065.404576094" lastFinishedPulling="2025-12-12 04:51:18.627695673 +0000 UTC m=+1069.503712820" observedRunningTime="2025-12-12 04:51:21.077912925 +0000 UTC m=+1071.953930072" watchObservedRunningTime="2025-12-12 04:51:21.081547198 +0000 UTC m=+1071.957564345" Dec 12 04:51:22 crc kubenswrapper[4796]: I1212 04:51:22.030117 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:51:22 crc kubenswrapper[4796]: I1212 04:51:22.030379 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.089218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5a25b030-6ebc-4ac2-8114-f24663c7a815","Type":"ContainerStarted","Data":"a79c6e969d07aa131d503bc75c1d3fb6101bc285c7232022f27e697d644e4573"} Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.091505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7de6f8df-6271-4b09-94c5-642c37337fcf","Type":"ContainerStarted","Data":"ccf76d2a78745d7d356c543f6604ae821fc654301a7a32e6eef58947c52a8a76"} Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.094260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4b59263e-1bd8-4661-b612-2f4bc4f611f1","Type":"ContainerStarted","Data":"3401350b445e5934027a981214cb6096cec24a668cb70b017008d88afc04e014"} Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.111410 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.199092918 podStartE2EDuration="38.111390739s" podCreationTimestamp="2025-12-12 04:50:49 +0000 UTC" firstStartedPulling="2025-12-12 04:51:14.536140115 +0000 UTC m=+1065.412157262" lastFinishedPulling="2025-12-12 04:51:26.448437926 +0000 UTC m=+1077.324455083" observedRunningTime="2025-12-12 04:51:27.105453263 +0000 UTC m=+1077.981470420" watchObservedRunningTime="2025-12-12 04:51:27.111390739 +0000 UTC m=+1077.987407896" Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.131332 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.240832737 podStartE2EDuration="41.131310062s" podCreationTimestamp="2025-12-12 04:50:46 +0000 UTC" firstStartedPulling="2025-12-12 04:51:15.55793747 +0000 UTC m=+1066.433954617" lastFinishedPulling="2025-12-12 04:51:26.448414795 +0000 UTC m=+1077.324431942" observedRunningTime="2025-12-12 04:51:27.123017252 +0000 UTC m=+1077.999034409" watchObservedRunningTime="2025-12-12 04:51:27.131310062 +0000 UTC m=+1078.007327219" Dec 12 04:51:27 crc kubenswrapper[4796]: I1212 04:51:27.510699 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.103336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerStarted","Data":"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681"} Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.105435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerStarted","Data":"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627"} Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.107154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"497a4966-f578-46a2-a33c-c3288f96f7f1","Type":"ContainerStarted","Data":"875be89faf848140fd2ee4ea5a51856488448c0353e6a78d615c9be3bba26078"} Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.107402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.108680 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3338cb28-50b7-41c6-af36-ec2fb86fb949","Type":"ContainerStarted","Data":"b3daf127215e3f9b26a01d7ec6f8ef960b182796cb096bdeb35f2de80245820d"} Dec 12 04:51:28 crc kubenswrapper[4796]: I1212 04:51:28.152026 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.778511325 podStartE2EDuration="47.152013054s" podCreationTimestamp="2025-12-12 04:50:41 +0000 UTC" firstStartedPulling="2025-12-12 04:50:42.556463717 +0000 UTC m=+1033.432480864" lastFinishedPulling="2025-12-12 04:51:27.929965436 +0000 UTC m=+1078.805982593" observedRunningTime="2025-12-12 04:51:28.151714234 +0000 UTC m=+1079.027731381" watchObservedRunningTime="2025-12-12 04:51:28.152013054 +0000 UTC m=+1079.028030201" Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.118807 4796 generic.go:334] "Generic (PLEG): container finished" podID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerID="fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627" exitCode=0 Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.118897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerDied","Data":"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627"} Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.121506 4796 generic.go:334] "Generic (PLEG): container finished" podID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerID="ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6" exitCode=0 Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.121535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" event={"ID":"32249fd3-e15b-4019-9518-aba16a1c74f3","Type":"ContainerDied","Data":"ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6"} Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.511221 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.569745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 12 04:51:29 crc kubenswrapper[4796]: I1212 04:51:29.850189 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 12 04:51:29 crc 
kubenswrapper[4796]: I1212 04:51:29.918242 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.129601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerStarted","Data":"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4"} Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.130073 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.132467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" event={"ID":"32249fd3-e15b-4019-9518-aba16a1c74f3","Type":"ContainerStarted","Data":"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb"} Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.133120 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.135364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3857d6ad-7515-4600-8e29-a5e3182f5253","Type":"ContainerStarted","Data":"5c960d282b73ccaf613f27942a36340b790558379e317bf72e18c3caea1f5d5e"} Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.135841 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.135946 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.150595 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" podStartSLOduration=4.150111526 podStartE2EDuration="54.150576813s" podCreationTimestamp="2025-12-12 04:50:36 +0000 UTC" firstStartedPulling="2025-12-12 04:50:37.92730421 +0000 UTC m=+1028.803321357" lastFinishedPulling="2025-12-12 04:51:27.927769477 +0000 UTC m=+1078.803786644" observedRunningTime="2025-12-12 04:51:30.147948411 +0000 UTC m=+1081.023965548" watchObservedRunningTime="2025-12-12 04:51:30.150576813 +0000 UTC m=+1081.026593960" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.171823 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.652753017 podStartE2EDuration="47.171798348s" podCreationTimestamp="2025-12-12 04:50:43 +0000 UTC" firstStartedPulling="2025-12-12 04:50:44.269503669 +0000 UTC m=+1035.145520806" lastFinishedPulling="2025-12-12 04:51:29.788549 +0000 UTC m=+1080.664566137" observedRunningTime="2025-12-12 04:51:30.163596651 +0000 UTC m=+1081.039613798" watchObservedRunningTime="2025-12-12 04:51:30.171798348 +0000 UTC m=+1081.047815495" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.180930 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.186336 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" podStartSLOduration=-9223371982.668465 podStartE2EDuration="54.186311043s" podCreationTimestamp="2025-12-12 04:50:36 +0000 UTC" firstStartedPulling="2025-12-12 04:50:37.725692098 +0000 
UTC m=+1028.601709245" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:30.185217699 +0000 UTC m=+1081.061234846" watchObservedRunningTime="2025-12-12 04:51:30.186311043 +0000 UTC m=+1081.062328190" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.192953 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.412649 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.450597 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.451816 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.453826 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.480716 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.535798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.535848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.535898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.535953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9db\" (UniqueName: \"kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.637091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.637157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: 
\"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.637216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz9db\" (UniqueName: \"kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.637261 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.638180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.638211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.638267 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.668363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz9db\" (UniqueName: \"kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db\") pod \"dnsmasq-dns-5bf47b49b7-j9kgl\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.703493 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g7lfn"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.704408 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.706673 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.729458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g7lfn"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc1f12e-5104-4f56-ae2a-da52e2f60434-config\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovn-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-combined-ca-bundle\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovs-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpkz\" (UniqueName: \"kubernetes.io/projected/2dc1f12e-5104-4f56-ae2a-da52e2f60434-kube-api-access-djpkz\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.740596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.773536 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc1f12e-5104-4f56-ae2a-da52e2f60434-config\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovn-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878413 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-combined-ca-bundle\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovs-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878468 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpkz\" (UniqueName: \"kubernetes.io/projected/2dc1f12e-5104-4f56-ae2a-da52e2f60434-kube-api-access-djpkz\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.878539 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.888833 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.892440 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.894100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovn-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.894164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2dc1f12e-5104-4f56-ae2a-da52e2f60434-ovs-rundir\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.894462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc1f12e-5104-4f56-ae2a-da52e2f60434-config\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.905179 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.905579 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-24945" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.905759 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.910933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.916176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-combined-ca-bundle\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.919870 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dc1f12e-5104-4f56-ae2a-da52e2f60434-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.926180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.933012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpkz\" (UniqueName: \"kubernetes.io/projected/2dc1f12e-5104-4f56-ae2a-da52e2f60434-kube-api-access-djpkz\") pod \"ovn-controller-metrics-g7lfn\" (UID: \"2dc1f12e-5104-4f56-ae2a-da52e2f60434\") " pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.979390 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981532 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n449m\" (UniqueName: \"kubernetes.io/projected/f3ad884c-e210-4b14-b98b-19d888c3886d-kube-api-access-n449m\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-scripts\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981584 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981684 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:30 crc kubenswrapper[4796]: I1212 04:51:30.981727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-config\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.026947 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g7lfn" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.059470 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.060834 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.064984 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.081873 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082486 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n449m\" (UniqueName: \"kubernetes.io/projected/f3ad884c-e210-4b14-b98b-19d888c3886d-kube-api-access-n449m\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-scripts\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082673 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-config\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7c6\" (UniqueName: \"kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.082858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.084099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.085431 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-scripts\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.089303 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ad884c-e210-4b14-b98b-19d888c3886d-config\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.091196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.098983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.099064 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ad884c-e210-4b14-b98b-19d888c3886d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.111347 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n449m\" (UniqueName: \"kubernetes.io/projected/f3ad884c-e210-4b14-b98b-19d888c3886d-kube-api-access-n449m\") pod \"ovn-northd-0\" (UID: \"f3ad884c-e210-4b14-b98b-19d888c3886d\") " pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.156513 4796 generic.go:334] "Generic (PLEG): container finished" podID="4b59263e-1bd8-4661-b612-2f4bc4f611f1" containerID="3401350b445e5934027a981214cb6096cec24a668cb70b017008d88afc04e014" exitCode=0 Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.157181 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4b59263e-1bd8-4661-b612-2f4bc4f611f1","Type":"ContainerDied","Data":"3401350b445e5934027a981214cb6096cec24a668cb70b017008d88afc04e014"} Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.183614 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7c6\" (UniqueName: \"kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.183660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.183738 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.183763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.183883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.184651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.184746 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc 
kubenswrapper[4796]: I1212 04:51:31.185167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.185439 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.203404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7c6\" (UniqueName: \"kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6\") pod \"dnsmasq-dns-8554648995-cjsg7\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.318647 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.419449 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.521319 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.619157 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g7lfn"] Dec 12 04:51:31 crc kubenswrapper[4796]: W1212 04:51:31.636367 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc1f12e_5104_4f56_ae2a_da52e2f60434.slice/crio-e68cbc2cc8b3b32d39c29de47af7a6224bce65f790f09ed9bae656fd0884d367 WatchSource:0}: Error finding container e68cbc2cc8b3b32d39c29de47af7a6224bce65f790f09ed9bae656fd0884d367: Status 404 returned error can't find the container with id e68cbc2cc8b3b32d39c29de47af7a6224bce65f790f09ed9bae656fd0884d367 Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.798705 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 12 04:51:31 crc kubenswrapper[4796]: I1212 04:51:31.963228 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:31 crc kubenswrapper[4796]: W1212 04:51:31.977461 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89bf0a84_aef6_435e_9334_038c98f04c82.slice/crio-16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd WatchSource:0}: Error finding container 16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd: Status 404 returned error can't find the container with id 16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.166166 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cjsg7" event={"ID":"89bf0a84-aef6-435e-9334-038c98f04c82","Type":"ContainerStarted","Data":"16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd"} Dec 12 04:51:32 crc kubenswrapper[4796]: 
I1212 04:51:32.168733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f3ad884c-e210-4b14-b98b-19d888c3886d","Type":"ContainerStarted","Data":"d1a886acd9f59d2794fda5fd3197ca8d5b12a5a74e51847655accd4c1525f96e"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.171955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4b59263e-1bd8-4661-b612-2f4bc4f611f1","Type":"ContainerStarted","Data":"9598aefe459b297629e4ece160584ffe76e7d9f48bb784d51021c830d1b8ffd8"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.173679 4796 generic.go:334] "Generic (PLEG): container finished" podID="3338cb28-50b7-41c6-af36-ec2fb86fb949" containerID="b3daf127215e3f9b26a01d7ec6f8ef960b182796cb096bdeb35f2de80245820d" exitCode=0 Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.173725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3338cb28-50b7-41c6-af36-ec2fb86fb949","Type":"ContainerDied","Data":"b3daf127215e3f9b26a01d7ec6f8ef960b182796cb096bdeb35f2de80245820d"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.176421 4796 generic.go:334] "Generic (PLEG): container finished" podID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerID="31321cc7f4cb89acb9d42c176c18df3593dba1b3a36e8f66df95a4e98675d874" exitCode=0 Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.176509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" event={"ID":"b43be7fc-757c-46dc-9d41-4958f92ef3bf","Type":"ContainerDied","Data":"31321cc7f4cb89acb9d42c176c18df3593dba1b3a36e8f66df95a4e98675d874"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.176610 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" event={"ID":"b43be7fc-757c-46dc-9d41-4958f92ef3bf","Type":"ContainerStarted","Data":"8d5895c022e159765858f27c6bc6d65623389331feca8045379b64ca185debcf"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.178660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g7lfn" event={"ID":"2dc1f12e-5104-4f56-ae2a-da52e2f60434","Type":"ContainerStarted","Data":"b317f2d41421f5994a4bc928c31b23824cafa0c26a714406c3d561f5356fd101"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.178691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g7lfn" event={"ID":"2dc1f12e-5104-4f56-ae2a-da52e2f60434","Type":"ContainerStarted","Data":"e68cbc2cc8b3b32d39c29de47af7a6224bce65f790f09ed9bae656fd0884d367"} Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.178843 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="dnsmasq-dns" containerID="cri-o://ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb" gracePeriod=10 Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.179020 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="dnsmasq-dns" containerID="cri-o://458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4" gracePeriod=10 Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.223728 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" 
podStartSLOduration=7.83831205 podStartE2EDuration="54.223703879s" podCreationTimestamp="2025-12-12 04:50:38 +0000 UTC" firstStartedPulling="2025-12-12 04:50:40.502010142 +0000 UTC m=+1031.378027289" lastFinishedPulling="2025-12-12 04:51:26.887401961 +0000 UTC m=+1077.763419118" observedRunningTime="2025-12-12 04:51:32.216587926 +0000 UTC m=+1083.092605073" watchObservedRunningTime="2025-12-12 04:51:32.223703879 +0000 UTC m=+1083.099721026" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.299690 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g7lfn" podStartSLOduration=2.299662179 podStartE2EDuration="2.299662179s" podCreationTimestamp="2025-12-12 04:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:32.293459924 +0000 UTC m=+1083.169477071" watchObservedRunningTime="2025-12-12 04:51:32.299662179 +0000 UTC m=+1083.175679326" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.635491 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.721256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hkth\" (UniqueName: \"kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth\") pod \"778cd05f-f671-4ec9-b48f-97f9af5f848a\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.721442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config\") pod \"778cd05f-f671-4ec9-b48f-97f9af5f848a\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.721519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc\") pod \"778cd05f-f671-4ec9-b48f-97f9af5f848a\" (UID: \"778cd05f-f671-4ec9-b48f-97f9af5f848a\") " Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.726942 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth" (OuterVolumeSpecName: "kube-api-access-8hkth") pod "778cd05f-f671-4ec9-b48f-97f9af5f848a" (UID: "778cd05f-f671-4ec9-b48f-97f9af5f848a"). InnerVolumeSpecName "kube-api-access-8hkth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.774250 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "778cd05f-f671-4ec9-b48f-97f9af5f848a" (UID: "778cd05f-f671-4ec9-b48f-97f9af5f848a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.775952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config" (OuterVolumeSpecName: "config") pod "778cd05f-f671-4ec9-b48f-97f9af5f848a" (UID: "778cd05f-f671-4ec9-b48f-97f9af5f848a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.823536 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.823567 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/778cd05f-f671-4ec9-b48f-97f9af5f848a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:32 crc kubenswrapper[4796]: I1212 04:51:32.823578 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hkth\" (UniqueName: \"kubernetes.io/projected/778cd05f-f671-4ec9-b48f-97f9af5f848a-kube-api-access-8hkth\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.117749 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.188532 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" event={"ID":"b43be7fc-757c-46dc-9d41-4958f92ef3bf","Type":"ContainerStarted","Data":"db2879f393a956285df382995f91aad5191eae6298e5392bd9fcea9aa8868ba8"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.190399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.193750 4796 generic.go:334] "Generic (PLEG): container finished" podID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerID="458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4" exitCode=0 Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.193794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerDied","Data":"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.193814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" event={"ID":"778cd05f-f671-4ec9-b48f-97f9af5f848a","Type":"ContainerDied","Data":"d23eabf9d6ec77d1dfbcec02167a505c92267bf950e93b854448f9757ea49e10"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.193828 4796 scope.go:117] "RemoveContainer" containerID="458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.193927 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2972q" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.228311 4796 generic.go:334] "Generic (PLEG): container finished" podID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerID="ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb" exitCode=0 Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.228949 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.228982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" event={"ID":"32249fd3-e15b-4019-9518-aba16a1c74f3","Type":"ContainerDied","Data":"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.229036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wbdbm" event={"ID":"32249fd3-e15b-4019-9518-aba16a1c74f3","Type":"ContainerDied","Data":"adf28bc813bea565cdf477e81f1b6d5d1833795b91c38e14fa75204949da5e09"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.229906 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc\") pod \"32249fd3-e15b-4019-9518-aba16a1c74f3\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.231059 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config\") pod \"32249fd3-e15b-4019-9518-aba16a1c74f3\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.231182 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xmfj\" (UniqueName: \"kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj\") pod \"32249fd3-e15b-4019-9518-aba16a1c74f3\" (UID: \"32249fd3-e15b-4019-9518-aba16a1c74f3\") " Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.241941 4796 generic.go:334] "Generic (PLEG): container finished" podID="89bf0a84-aef6-435e-9334-038c98f04c82" containerID="46cab9999f2975de2b571bdda08c04fdf61223bc4363395c66c4329152cf994b" exitCode=0 Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.242069 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cjsg7" event={"ID":"89bf0a84-aef6-435e-9334-038c98f04c82","Type":"ContainerDied","Data":"46cab9999f2975de2b571bdda08c04fdf61223bc4363395c66c4329152cf994b"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.248924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj" (OuterVolumeSpecName: "kube-api-access-8xmfj") pod "32249fd3-e15b-4019-9518-aba16a1c74f3" (UID: "32249fd3-e15b-4019-9518-aba16a1c74f3"). InnerVolumeSpecName "kube-api-access-8xmfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.259151 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" podStartSLOduration=3.259132852 podStartE2EDuration="3.259132852s" podCreationTimestamp="2025-12-12 04:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:33.215576227 +0000 UTC m=+1084.091593394" watchObservedRunningTime="2025-12-12 04:51:33.259132852 +0000 UTC m=+1084.135149999" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.262214 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.268269 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2972q"] Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.273294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3338cb28-50b7-41c6-af36-ec2fb86fb949","Type":"ContainerStarted","Data":"c5d0be823ec900c8646b981a091e58a17c5610a7a92b3c70482865c1d1beb626"} Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.321270 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config" (OuterVolumeSpecName: "config") pod "32249fd3-e15b-4019-9518-aba16a1c74f3" (UID: "32249fd3-e15b-4019-9518-aba16a1c74f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.321997 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371982.532797 podStartE2EDuration="54.321979461s" podCreationTimestamp="2025-12-12 04:50:39 +0000 UTC" firstStartedPulling="2025-12-12 04:50:41.963422582 +0000 UTC m=+1032.839439729" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:33.320918767 +0000 UTC m=+1084.196935914" watchObservedRunningTime="2025-12-12 04:51:33.321979461 +0000 UTC m=+1084.197996608" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.327002 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32249fd3-e15b-4019-9518-aba16a1c74f3" (UID: "32249fd3-e15b-4019-9518-aba16a1c74f3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.357389 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.357416 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32249fd3-e15b-4019-9518-aba16a1c74f3-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.357425 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xmfj\" (UniqueName: \"kubernetes.io/projected/32249fd3-e15b-4019-9518-aba16a1c74f3-kube-api-access-8xmfj\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.421623 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" path="/var/lib/kubelet/pods/778cd05f-f671-4ec9-b48f-97f9af5f848a/volumes" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.557857 4796 scope.go:117] "RemoveContainer" containerID="fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.560048 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.568738 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wbdbm"] Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.576041 4796 scope.go:117] "RemoveContainer" containerID="458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4" Dec 12 04:51:33 crc kubenswrapper[4796]: E1212 04:51:33.576850 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4\": container with ID starting with 458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4 not found: ID does not exist" containerID="458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.576898 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4"} err="failed to get container status \"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4\": rpc error: code = NotFound desc = could not find container \"458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4\": container with ID starting with 458b4686a0823ca34a17f675aba33a7201be30dd4d0f9004ff43a2979680dcb4 not found: ID does not exist" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.576930 4796 scope.go:117] "RemoveContainer" containerID="fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627" Dec 12 04:51:33 crc kubenswrapper[4796]: E1212 04:51:33.577527 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627\": container with ID starting with fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627 not found: ID does not exist" containerID="fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.577557 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627"} err="failed to get container status \"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627\": rpc error: code = NotFound desc = could not find container \"fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627\": container with ID starting with fd8f5fbb1d44be0229f137fd8f69739d0ac781837d308ac28cfb196ea4582627 not found: ID does not exist" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.577579 4796 scope.go:117] "RemoveContainer" containerID="ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.609560 4796 scope.go:117] "RemoveContainer" containerID="ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.633715 4796 scope.go:117] "RemoveContainer" containerID="ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb" Dec 12 04:51:33 crc kubenswrapper[4796]: E1212 04:51:33.634151 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb\": container with ID starting with ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb not found: ID does not exist" containerID="ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.634188 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb"} err="failed to get container status \"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb\": rpc error: code = NotFound desc = could not find container \"ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb\": container with ID starting with ffbb3d7bd95d378558d3e8cc97148245ff20872aec51502d336f9d20124cd5eb not found: ID does not exist" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.634214 4796 scope.go:117] "RemoveContainer" containerID="ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6" Dec 12 04:51:33 crc kubenswrapper[4796]: E1212 04:51:33.634763 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6\": container with ID starting with ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6 not found: ID does not exist" containerID="ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6" Dec 12 04:51:33 crc kubenswrapper[4796]: I1212 04:51:33.634813 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6"} err="failed to get container status \"ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6\": rpc error: code = NotFound desc = could not find container \"ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6\": container with ID starting with ac58cd9f39895f728ea9cbb32b19aec006566a8bec7c87fb1d131b4d707637c6 not found: ID does not exist" Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.287317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cjsg7" 
event={"ID":"89bf0a84-aef6-435e-9334-038c98f04c82","Type":"ContainerStarted","Data":"ae3832fde56a3737fffc124308918d7f38425baa9a935c6365f0a5f287b44c4d"} Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.287916 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.289832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f3ad884c-e210-4b14-b98b-19d888c3886d","Type":"ContainerStarted","Data":"33d3d34f7c692a1c67b7e0ddbb12caf0d340626be5d3acf48ea9a3ff86bfa5bb"} Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.289881 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f3ad884c-e210-4b14-b98b-19d888c3886d","Type":"ContainerStarted","Data":"18dbfa5c105a6a61de3ce3f06a28c409d0a257aeedb7db094627041e0a727406"} Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.290064 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.312626 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-cjsg7" podStartSLOduration=4.312608109 podStartE2EDuration="4.312608109s" podCreationTimestamp="2025-12-12 04:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:34.306318712 +0000 UTC m=+1085.182335869" watchObservedRunningTime="2025-12-12 04:51:34.312608109 +0000 UTC m=+1085.188625246" Dec 12 04:51:34 crc kubenswrapper[4796]: I1212 04:51:34.351574 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.567845732 podStartE2EDuration="4.351524609s" podCreationTimestamp="2025-12-12 04:51:30 +0000 UTC" firstStartedPulling="2025-12-12 04:51:31.826962398 +0000 UTC m=+1082.702979535" lastFinishedPulling="2025-12-12 04:51:33.610641265 +0000 UTC m=+1084.486658412" observedRunningTime="2025-12-12 04:51:34.338359457 +0000 UTC m=+1085.214376604" watchObservedRunningTime="2025-12-12 04:51:34.351524609 +0000 UTC m=+1085.227541786" Dec 12 04:51:35 crc kubenswrapper[4796]: I1212 04:51:35.419784 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" path="/var/lib/kubelet/pods/32249fd3-e15b-4019-9518-aba16a1c74f3/volumes" Dec 12 04:51:36 crc kubenswrapper[4796]: I1212 04:51:36.733347 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 12 04:51:39 crc kubenswrapper[4796]: I1212 04:51:39.751046 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 12 04:51:39 crc kubenswrapper[4796]: I1212 04:51:39.751399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 12 04:51:39 crc kubenswrapper[4796]: I1212 04:51:39.815557 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 12 04:51:40 crc kubenswrapper[4796]: I1212 04:51:40.444705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 12 04:51:40 crc kubenswrapper[4796]: I1212 04:51:40.776464 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 
12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.230445 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.230499 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.275408 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rdhff"] Dec 12 04:51:41 crc kubenswrapper[4796]: E1212 04:51:41.276010 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.276110 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: E1212 04:51:41.276191 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="init" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.276265 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="init" Dec 12 04:51:41 crc kubenswrapper[4796]: E1212 04:51:41.279932 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.280031 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: E1212 04:51:41.280142 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="init" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.280217 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="init" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.280608 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="32249fd3-e15b-4019-9518-aba16a1c74f3" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.280725 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="778cd05f-f671-4ec9-b48f-97f9af5f848a" containerName="dnsmasq-dns" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.281510 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.287717 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rdhff"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.375787 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7855-account-create-update-4pqr9"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.376800 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.380023 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.384946 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.385133 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7855-account-create-update-4pqr9"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.400215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts\") pod \"keystone-db-create-rdhff\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.408003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2glb\" (UniqueName: \"kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb\") pod \"keystone-db-create-rdhff\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.425703 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.510129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2glb\" (UniqueName: \"kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb\") pod \"keystone-db-create-rdhff\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.510245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts\") pod \"keystone-db-create-rdhff\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.510425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrb7\" (UniqueName: \"kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.510492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.512083 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts\") pod \"keystone-db-create-rdhff\" 
(UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.517651 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.526563 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.526841 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="dnsmasq-dns" containerID="cri-o://db2879f393a956285df382995f91aad5191eae6298e5392bd9fcea9aa8868ba8" gracePeriod=10 Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.571403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2glb\" (UniqueName: \"kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb\") pod \"keystone-db-create-rdhff\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.612249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrb7\" (UniqueName: \"kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.612338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.613578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.619331 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.658922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrb7\" (UniqueName: \"kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7\") pod \"keystone-7855-account-create-update-4pqr9\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.700673 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.771557 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9j8zv"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.772718 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.790545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9j8zv"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.878539 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bc07-account-create-update-h44pp"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.879650 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.884380 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.895242 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc07-account-create-update-h44pp"] Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.915926 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:41 crc kubenswrapper[4796]: I1212 04:51:41.915978 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkz9\" (UniqueName: \"kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.017209 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.019611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.019670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkz9\" (UniqueName: \"kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.019747 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rj2\" (UniqueName: \"kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.020522 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.038373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkz9\" (UniqueName: \"kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9\") pod \"placement-db-create-9j8zv\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.102690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.120925 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.121044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7rj2\" (UniqueName: \"kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.128338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.138152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7rj2\" (UniqueName: \"kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2\") pod \"placement-bc07-account-create-update-h44pp\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.171430 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rdhff"] Dec 12 04:51:42 crc kubenswrapper[4796]: W1212 04:51:42.173015 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f494e02_983f_4a3e_ada3_001cc79da003.slice/crio-d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62 WatchSource:0}: Error finding container d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62: Status 404 returned error can't find the container with id d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62 Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.203832 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.279663 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7855-account-create-update-4pqr9"] Dec 12 04:51:42 crc kubenswrapper[4796]: W1212 04:51:42.353195 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eeccb12_7501_4d76_b545_e0a667235668.slice/crio-714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d WatchSource:0}: Error finding container 714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d: Status 404 returned error can't find the container with id 714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.403699 4796 generic.go:334] "Generic (PLEG): container finished" podID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerID="db2879f393a956285df382995f91aad5191eae6298e5392bd9fcea9aa8868ba8" exitCode=0 Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.404054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" event={"ID":"b43be7fc-757c-46dc-9d41-4958f92ef3bf","Type":"ContainerDied","Data":"db2879f393a956285df382995f91aad5191eae6298e5392bd9fcea9aa8868ba8"} Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.452553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rdhff" event={"ID":"1f494e02-983f-4a3e-ada3-001cc79da003","Type":"ContainerStarted","Data":"d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62"} Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.527624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9j8zv"] Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.769285 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.949107 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc\") pod \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.949200 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb\") pod \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.949258 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz9db\" (UniqueName: \"kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db\") pod \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.949399 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config\") pod \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\" (UID: \"b43be7fc-757c-46dc-9d41-4958f92ef3bf\") " Dec 12 04:51:42 crc kubenswrapper[4796]: I1212 04:51:42.968220 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db" (OuterVolumeSpecName: "kube-api-access-sz9db") pod "b43be7fc-757c-46dc-9d41-4958f92ef3bf" (UID: "b43be7fc-757c-46dc-9d41-4958f92ef3bf"). InnerVolumeSpecName "kube-api-access-sz9db". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.002230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b43be7fc-757c-46dc-9d41-4958f92ef3bf" (UID: "b43be7fc-757c-46dc-9d41-4958f92ef3bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.020717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b43be7fc-757c-46dc-9d41-4958f92ef3bf" (UID: "b43be7fc-757c-46dc-9d41-4958f92ef3bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.021705 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc07-account-create-update-h44pp"] Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.028594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config" (OuterVolumeSpecName: "config") pod "b43be7fc-757c-46dc-9d41-4958f92ef3bf" (UID: "b43be7fc-757c-46dc-9d41-4958f92ef3bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.051591 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.051625 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.051639 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz9db\" (UniqueName: \"kubernetes.io/projected/b43be7fc-757c-46dc-9d41-4958f92ef3bf-kube-api-access-sz9db\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.051651 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b43be7fc-757c-46dc-9d41-4958f92ef3bf-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.399969 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:51:43 crc kubenswrapper[4796]: E1212 04:51:43.400882 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="dnsmasq-dns" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.400905 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="dnsmasq-dns" Dec 12 04:51:43 crc kubenswrapper[4796]: E1212 04:51:43.400917 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="init" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.400927 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="init" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.401139 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" containerName="dnsmasq-dns" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.401937 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.445824 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.496508 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.503673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" event={"ID":"b43be7fc-757c-46dc-9d41-4958f92ef3bf","Type":"ContainerDied","Data":"8d5895c022e159765858f27c6bc6d65623389331feca8045379b64ca185debcf"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.503730 4796 scope.go:117] "RemoveContainer" containerID="db2879f393a956285df382995f91aad5191eae6298e5392bd9fcea9aa8868ba8" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.503916 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-j9kgl" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.510587 4796 generic.go:334] "Generic (PLEG): container finished" podID="df906188-0fd2-43e8-8dca-16474c2ab546" containerID="84e41ed23ab2b9e0c16f858febcbbc7d61252dfbfe0bef02f950eb37d9f0c034" exitCode=0 Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.510659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9j8zv" event={"ID":"df906188-0fd2-43e8-8dca-16474c2ab546","Type":"ContainerDied","Data":"84e41ed23ab2b9e0c16f858febcbbc7d61252dfbfe0bef02f950eb37d9f0c034"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.510682 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9j8zv" event={"ID":"df906188-0fd2-43e8-8dca-16474c2ab546","Type":"ContainerStarted","Data":"db592495fabb8e78fcb8f12a25d72b73c537a2a0385d069b3d528efeab299040"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.512759 4796 generic.go:334] "Generic (PLEG): container finished" podID="8eeccb12-7501-4d76-b545-e0a667235668" containerID="744c84167243342353b0c775a602df649c82bec21d14427a85f9c01137155509" exitCode=0 Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.512806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7855-account-create-update-4pqr9" event={"ID":"8eeccb12-7501-4d76-b545-e0a667235668","Type":"ContainerDied","Data":"744c84167243342353b0c775a602df649c82bec21d14427a85f9c01137155509"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.512823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7855-account-create-update-4pqr9" event={"ID":"8eeccb12-7501-4d76-b545-e0a667235668","Type":"ContainerStarted","Data":"714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.523985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc07-account-create-update-h44pp" event={"ID":"7366c77a-ccc0-44d6-85cf-e8048e5daedc","Type":"ContainerStarted","Data":"b71a8b954d3f45fde44743cffa4e43915bc88341bc01974f83dc4183f016fc5e"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.524034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc07-account-create-update-h44pp" event={"ID":"7366c77a-ccc0-44d6-85cf-e8048e5daedc","Type":"ContainerStarted","Data":"552c07192c7d491aa54c1ffc3ed8fa195917f2f2e1a369d46bb1eee6654c481f"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.538682 4796 generic.go:334] "Generic (PLEG): container finished" podID="1f494e02-983f-4a3e-ada3-001cc79da003" containerID="12778d9cfd36dfeda7807972e2f53f34edd8061dc3e8ad04b76d8bc5426c8e60" exitCode=0 Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.538742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rdhff" event={"ID":"1f494e02-983f-4a3e-ada3-001cc79da003","Type":"ContainerDied","Data":"12778d9cfd36dfeda7807972e2f53f34edd8061dc3e8ad04b76d8bc5426c8e60"} Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.562768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6sf\" (UniqueName: \"kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 
04:51:43.568581 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.568741 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.568807 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.568858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.585862 4796 scope.go:117] "RemoveContainer" containerID="31321cc7f4cb89acb9d42c176c18df3593dba1b3a36e8f66df95a4e98675d874" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.636176 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bc07-account-create-update-h44pp" podStartSLOduration=2.636155058 podStartE2EDuration="2.636155058s" podCreationTimestamp="2025-12-12 04:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:43.629078696 +0000 UTC m=+1094.505095853" watchObservedRunningTime="2025-12-12 04:51:43.636155058 +0000 UTC m=+1094.512172205" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.668309 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.672889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.672936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.673037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6sf\" (UniqueName: \"kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: 
\"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.673106 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.673168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.676792 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.676982 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.677751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.678338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.678374 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-j9kgl"] Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.700129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6sf\" (UniqueName: \"kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf\") pod \"dnsmasq-dns-b8fbc5445-hcbxs\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:43 crc kubenswrapper[4796]: I1212 04:51:43.723678 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.200847 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.546306 4796 generic.go:334] "Generic (PLEG): container finished" podID="7366c77a-ccc0-44d6-85cf-e8048e5daedc" containerID="b71a8b954d3f45fde44743cffa4e43915bc88341bc01974f83dc4183f016fc5e" exitCode=0 Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.546378 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc07-account-create-update-h44pp" event={"ID":"7366c77a-ccc0-44d6-85cf-e8048e5daedc","Type":"ContainerDied","Data":"b71a8b954d3f45fde44743cffa4e43915bc88341bc01974f83dc4183f016fc5e"} Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.548556 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerID="2665b2177a2e60a86e6f4a6ec842868f413cdf8b2672ef27088867571605f6bf" exitCode=0 Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.548603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" event={"ID":"ff3d9df9-a1b6-4474-9614-4d7535353752","Type":"ContainerDied","Data":"2665b2177a2e60a86e6f4a6ec842868f413cdf8b2672ef27088867571605f6bf"} Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.548654 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" event={"ID":"ff3d9df9-a1b6-4474-9614-4d7535353752","Type":"ContainerStarted","Data":"08020f41c9b3d74b6d6a4dd8b33f4c26457687397fe6180ec6c97196919bfccb"} Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.612121 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.621715 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.626023 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.626196 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.626328 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xljz6" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.626444 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.642632 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.690268 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-cache\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.690351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.690384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8x64\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-kube-api-access-s8x64\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.690403 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.690420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-lock\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.799445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-cache\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.799547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.799603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s8x64\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-kube-api-access-s8x64\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.799667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.799699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-lock\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.800398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-lock\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.800708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/09626c7b-0eba-4fe5-9598-ac562516cb98-cache\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.801018 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: E1212 04:51:44.801450 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:51:44 crc kubenswrapper[4796]: E1212 04:51:44.801489 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:51:44 crc kubenswrapper[4796]: E1212 04:51:44.801559 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:51:45.301538223 +0000 UTC m=+1096.177555380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.831699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8x64\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-kube-api-access-s8x64\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.867987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:44 crc kubenswrapper[4796]: I1212 04:51:44.954725 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.056617 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.063093 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.109683 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrb7\" (UniqueName: \"kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7\") pod \"8eeccb12-7501-4d76-b545-e0a667235668\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.109775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts\") pod \"8eeccb12-7501-4d76-b545-e0a667235668\" (UID: \"8eeccb12-7501-4d76-b545-e0a667235668\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.110311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eeccb12-7501-4d76-b545-e0a667235668" (UID: "8eeccb12-7501-4d76-b545-e0a667235668"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.113311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7" (OuterVolumeSpecName: "kube-api-access-jvrb7") pod "8eeccb12-7501-4d76-b545-e0a667235668" (UID: "8eeccb12-7501-4d76-b545-e0a667235668"). InnerVolumeSpecName "kube-api-access-jvrb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.211661 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2glb\" (UniqueName: \"kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb\") pod \"1f494e02-983f-4a3e-ada3-001cc79da003\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.211759 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtkz9\" (UniqueName: \"kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9\") pod \"df906188-0fd2-43e8-8dca-16474c2ab546\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.211919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts\") pod \"df906188-0fd2-43e8-8dca-16474c2ab546\" (UID: \"df906188-0fd2-43e8-8dca-16474c2ab546\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.211971 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts\") pod \"1f494e02-983f-4a3e-ada3-001cc79da003\" (UID: \"1f494e02-983f-4a3e-ada3-001cc79da003\") " Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.212332 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eeccb12-7501-4d76-b545-e0a667235668-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.212348 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvrb7\" (UniqueName: \"kubernetes.io/projected/8eeccb12-7501-4d76-b545-e0a667235668-kube-api-access-jvrb7\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.212938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df906188-0fd2-43e8-8dca-16474c2ab546" (UID: "df906188-0fd2-43e8-8dca-16474c2ab546"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.213016 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f494e02-983f-4a3e-ada3-001cc79da003" (UID: "1f494e02-983f-4a3e-ada3-001cc79da003"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.215510 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9" (OuterVolumeSpecName: "kube-api-access-rtkz9") pod "df906188-0fd2-43e8-8dca-16474c2ab546" (UID: "df906188-0fd2-43e8-8dca-16474c2ab546"). InnerVolumeSpecName "kube-api-access-rtkz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.217924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb" (OuterVolumeSpecName: "kube-api-access-n2glb") pod "1f494e02-983f-4a3e-ada3-001cc79da003" (UID: "1f494e02-983f-4a3e-ada3-001cc79da003"). InnerVolumeSpecName "kube-api-access-n2glb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.313952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:45 crc kubenswrapper[4796]: E1212 04:51:45.314159 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:51:45 crc kubenswrapper[4796]: E1212 04:51:45.314188 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.314192 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df906188-0fd2-43e8-8dca-16474c2ab546-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.314220 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f494e02-983f-4a3e-ada3-001cc79da003-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: E1212 04:51:45.314248 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:51:46.314230206 +0000 UTC m=+1097.190247353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.314276 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2glb\" (UniqueName: \"kubernetes.io/projected/1f494e02-983f-4a3e-ada3-001cc79da003-kube-api-access-n2glb\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.314303 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtkz9\" (UniqueName: \"kubernetes.io/projected/df906188-0fd2-43e8-8dca-16474c2ab546-kube-api-access-rtkz9\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.421440 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43be7fc-757c-46dc-9d41-4958f92ef3bf" path="/var/lib/kubelet/pods/b43be7fc-757c-46dc-9d41-4958f92ef3bf/volumes" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.559912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9j8zv" event={"ID":"df906188-0fd2-43e8-8dca-16474c2ab546","Type":"ContainerDied","Data":"db592495fabb8e78fcb8f12a25d72b73c537a2a0385d069b3d528efeab299040"} Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.559961 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db592495fabb8e78fcb8f12a25d72b73c537a2a0385d069b3d528efeab299040" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.559993 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9j8zv" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.562102 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7855-account-create-update-4pqr9" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.561999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7855-account-create-update-4pqr9" event={"ID":"8eeccb12-7501-4d76-b545-e0a667235668","Type":"ContainerDied","Data":"714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d"} Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.562526 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714858d2cd113d28bdcdfa55a70c2d0917e179ced37c699ad0775c16dd04888d" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.564501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rdhff" event={"ID":"1f494e02-983f-4a3e-ada3-001cc79da003","Type":"ContainerDied","Data":"d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62"} Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.564533 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1007f808ce2346083d5ff32ea988a2b33d1c4827883a746ff02e9439ca24c62" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.564590 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rdhff" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.568776 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" event={"ID":"ff3d9df9-a1b6-4474-9614-4d7535353752","Type":"ContainerStarted","Data":"c50194b2a3c997e56a9df9912a8060d2ba5dd33d577e0e56f87de3b069b4e284"} Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.594128 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podStartSLOduration=2.594111086 podStartE2EDuration="2.594111086s" podCreationTimestamp="2025-12-12 04:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:51:45.59297714 +0000 UTC m=+1096.468994307" watchObservedRunningTime="2025-12-12 04:51:45.594111086 +0000 UTC m=+1096.470128243" Dec 12 04:51:45 crc kubenswrapper[4796]: I1212 04:51:45.877698 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.026558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7rj2\" (UniqueName: \"kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2\") pod \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.026818 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts\") pod \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\" (UID: \"7366c77a-ccc0-44d6-85cf-e8048e5daedc\") " Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.028189 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7366c77a-ccc0-44d6-85cf-e8048e5daedc" (UID: "7366c77a-ccc0-44d6-85cf-e8048e5daedc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.032836 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2" (OuterVolumeSpecName: "kube-api-access-v7rj2") pod "7366c77a-ccc0-44d6-85cf-e8048e5daedc" (UID: "7366c77a-ccc0-44d6-85cf-e8048e5daedc"). InnerVolumeSpecName "kube-api-access-v7rj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.128111 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7366c77a-ccc0-44d6-85cf-e8048e5daedc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.128147 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7rj2\" (UniqueName: \"kubernetes.io/projected/7366c77a-ccc0-44d6-85cf-e8048e5daedc-kube-api-access-v7rj2\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.330821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.331096 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.331113 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.331171 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:51:48.331154029 +0000 UTC m=+1099.207171196 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.413773 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.577041 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bc07-account-create-update-h44pp" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.577032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc07-account-create-update-h44pp" event={"ID":"7366c77a-ccc0-44d6-85cf-e8048e5daedc","Type":"ContainerDied","Data":"552c07192c7d491aa54c1ffc3ed8fa195917f2f2e1a369d46bb1eee6654c481f"} Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.577526 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552c07192c7d491aa54c1ffc3ed8fa195917f2f2e1a369d46bb1eee6654c481f" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.577549 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903304 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pqzxn"] Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.903653 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df906188-0fd2-43e8-8dca-16474c2ab546" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903669 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="df906188-0fd2-43e8-8dca-16474c2ab546" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.903686 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeccb12-7501-4d76-b545-e0a667235668" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903692 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeccb12-7501-4d76-b545-e0a667235668" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.903700 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7366c77a-ccc0-44d6-85cf-e8048e5daedc" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903706 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7366c77a-ccc0-44d6-85cf-e8048e5daedc" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: E1212 04:51:46.903720 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f494e02-983f-4a3e-ada3-001cc79da003" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903726 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f494e02-983f-4a3e-ada3-001cc79da003" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903865 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7366c77a-ccc0-44d6-85cf-e8048e5daedc" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903874 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f494e02-983f-4a3e-ada3-001cc79da003" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903884 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeccb12-7501-4d76-b545-e0a667235668" containerName="mariadb-account-create-update" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.903930 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="df906188-0fd2-43e8-8dca-16474c2ab546" containerName="mariadb-database-create" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.904454 
4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:46 crc kubenswrapper[4796]: I1212 04:51:46.920020 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pqzxn"] Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.051727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.051824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxl2\" (UniqueName: \"kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.107562 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bee9-account-create-update-cb5v4"] Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.108774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.111505 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.115556 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bee9-account-create-update-cb5v4"] Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.153095 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrxl2\" (UniqueName: \"kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.153227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.154062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.172386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrxl2\" (UniqueName: \"kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2\") pod \"glance-db-create-pqzxn\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.223512 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.254865 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.254966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999dt\" (UniqueName: \"kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.355962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.356465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999dt\" (UniqueName: \"kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.356704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.374825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999dt\" (UniqueName: \"kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt\") pod \"glance-bee9-account-create-update-cb5v4\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.422826 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.672314 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pqzxn"] Dec 12 04:51:47 crc kubenswrapper[4796]: I1212 04:51:47.886042 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bee9-account-create-update-cb5v4"] Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.382959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:48 crc kubenswrapper[4796]: E1212 04:51:48.383215 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:51:48 crc kubenswrapper[4796]: E1212 04:51:48.383232 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:51:48 crc kubenswrapper[4796]: E1212 04:51:48.383310 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:51:52.383266606 +0000 UTC m=+1103.259283753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.496995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-x6qjf"] Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.498230 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.502472 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.503247 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.505578 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x6qjf"] Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.507579 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586069 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586095 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586188 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4mh\" (UniqueName: \"kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.586202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 
04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.603193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pqzxn" event={"ID":"f85e0850-2c85-40bf-bb1a-03843b97b6ac","Type":"ContainerStarted","Data":"e3c309ef4688d132960edba50910b343a21bc43f8efe82163d93b682ea63f19e"} Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.604212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bee9-account-create-update-cb5v4" event={"ID":"57685147-1da1-474a-ab9d-b33e05450527","Type":"ContainerStarted","Data":"e56eb5f1b21fc2f7141d095d3ba8f4e35b6c0cae13c9b877b2decd957fef714d"} Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.687636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688608 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688642 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688704 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688722 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4mh\" (UniqueName: \"kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.688739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.689023 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift\") pod 
\"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.689619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.689630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.695850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.696095 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.696718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.711384 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4mh\" (UniqueName: \"kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh\") pod \"swift-ring-rebalance-x6qjf\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:48 crc kubenswrapper[4796]: I1212 04:51:48.834740 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.080025 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x6qjf"] Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.615494 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e0850-2c85-40bf-bb1a-03843b97b6ac" containerID="65301d5ecc69ad877b7a1553751e7cc537d6b7a6bfd49f9bb45595ef814fc021" exitCode=0 Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.615550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pqzxn" event={"ID":"f85e0850-2c85-40bf-bb1a-03843b97b6ac","Type":"ContainerDied","Data":"65301d5ecc69ad877b7a1553751e7cc537d6b7a6bfd49f9bb45595ef814fc021"} Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.617770 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x6qjf" event={"ID":"db25cf2d-5a36-4289-b5d2-3a156acaee44","Type":"ContainerStarted","Data":"bce7702cfc9cde3cbf215dd0e528ba4bf2d5171538c6b8fd53a72c6990b4c92d"} Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.619880 4796 generic.go:334] "Generic (PLEG): container finished" podID="57685147-1da1-474a-ab9d-b33e05450527" containerID="27a95856ba385b01fca08f9337c67999c04567f3e1835f8c24770aadf31176bf" exitCode=0 Dec 12 04:51:49 crc kubenswrapper[4796]: I1212 04:51:49.619907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bee9-account-create-update-cb5v4" event={"ID":"57685147-1da1-474a-ab9d-b33e05450527","Type":"ContainerDied","Data":"27a95856ba385b01fca08f9337c67999c04567f3e1835f8c24770aadf31176bf"} Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.031748 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.053888 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.055427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-999dt\" (UniqueName: \"kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt\") pod \"57685147-1da1-474a-ab9d-b33e05450527\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.055468 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts\") pod \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.055557 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrxl2\" (UniqueName: \"kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2\") pod \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\" (UID: \"f85e0850-2c85-40bf-bb1a-03843b97b6ac\") " Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.055614 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts\") pod \"57685147-1da1-474a-ab9d-b33e05450527\" (UID: \"57685147-1da1-474a-ab9d-b33e05450527\") " Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.056963 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57685147-1da1-474a-ab9d-b33e05450527" (UID: "57685147-1da1-474a-ab9d-b33e05450527"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.057025 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f85e0850-2c85-40bf-bb1a-03843b97b6ac" (UID: "f85e0850-2c85-40bf-bb1a-03843b97b6ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.064164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2" (OuterVolumeSpecName: "kube-api-access-zrxl2") pod "f85e0850-2c85-40bf-bb1a-03843b97b6ac" (UID: "f85e0850-2c85-40bf-bb1a-03843b97b6ac"). InnerVolumeSpecName "kube-api-access-zrxl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.069703 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt" (OuterVolumeSpecName: "kube-api-access-999dt") pod "57685147-1da1-474a-ab9d-b33e05450527" (UID: "57685147-1da1-474a-ab9d-b33e05450527"). InnerVolumeSpecName "kube-api-access-999dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.157173 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrxl2\" (UniqueName: \"kubernetes.io/projected/f85e0850-2c85-40bf-bb1a-03843b97b6ac-kube-api-access-zrxl2\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.157456 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57685147-1da1-474a-ab9d-b33e05450527-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.157471 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999dt\" (UniqueName: \"kubernetes.io/projected/57685147-1da1-474a-ab9d-b33e05450527-kube-api-access-999dt\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.157482 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85e0850-2c85-40bf-bb1a-03843b97b6ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.637695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pqzxn" event={"ID":"f85e0850-2c85-40bf-bb1a-03843b97b6ac","Type":"ContainerDied","Data":"e3c309ef4688d132960edba50910b343a21bc43f8efe82163d93b682ea63f19e"} Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.637738 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c309ef4688d132960edba50910b343a21bc43f8efe82163d93b682ea63f19e" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.637759 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pqzxn" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.640763 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bee9-account-create-update-cb5v4" Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.641407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bee9-account-create-update-cb5v4" event={"ID":"57685147-1da1-474a-ab9d-b33e05450527","Type":"ContainerDied","Data":"e56eb5f1b21fc2f7141d095d3ba8f4e35b6c0cae13c9b877b2decd957fef714d"} Dec 12 04:51:51 crc kubenswrapper[4796]: I1212 04:51:51.641511 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56eb5f1b21fc2f7141d095d3ba8f4e35b6c0cae13c9b877b2decd957fef714d" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.003117 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ml9sj" podUID="0751eb6e-3452-4b8d-abfa-d37121e1a03e" containerName="ovn-controller" probeResult="failure" output=< Dec 12 04:51:52 crc kubenswrapper[4796]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 12 04:51:52 crc kubenswrapper[4796]: > Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.083165 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.123146 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9xcn6" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.341162 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ml9sj-config-lgl5b"] Dec 12 04:51:52 crc kubenswrapper[4796]: E1212 04:51:52.341873 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e0850-2c85-40bf-bb1a-03843b97b6ac" containerName="mariadb-database-create" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.341895 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e0850-2c85-40bf-bb1a-03843b97b6ac" containerName="mariadb-database-create" Dec 12 04:51:52 crc kubenswrapper[4796]: E1212 04:51:52.341936 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57685147-1da1-474a-ab9d-b33e05450527" containerName="mariadb-account-create-update" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.341946 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="57685147-1da1-474a-ab9d-b33e05450527" containerName="mariadb-account-create-update" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.342148 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e0850-2c85-40bf-bb1a-03843b97b6ac" containerName="mariadb-database-create" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.342181 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="57685147-1da1-474a-ab9d-b33e05450527" containerName="mariadb-account-create-update" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.342883 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.345975 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.369322 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ml9sj-config-lgl5b"] Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbnf\" (UniqueName: \"kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477620 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.477745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: E1212 04:51:52.477765 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:51:52 crc kubenswrapper[4796]: E1212 04:51:52.477855 4796 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:51:52 crc kubenswrapper[4796]: E1212 04:51:52.477912 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:52:00.477893544 +0000 UTC m=+1111.353910771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579431 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579501 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579548 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579582 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579696 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbnf\" (UniqueName: \"kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579762 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.579836 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.581266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.581749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.609093 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbnf\" (UniqueName: \"kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf\") pod \"ovn-controller-ml9sj-config-lgl5b\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.650247 4796 generic.go:334] "Generic (PLEG): container finished" podID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerID="0638c4b39def9c37b4ed634dc7f7190e375875f7342113d40c4cbff5aad06f38" exitCode=0 Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.650337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerDied","Data":"0638c4b39def9c37b4ed634dc7f7190e375875f7342113d40c4cbff5aad06f38"} Dec 12 04:51:52 crc kubenswrapper[4796]: I1212 04:51:52.664686 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.659699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x6qjf" event={"ID":"db25cf2d-5a36-4289-b5d2-3a156acaee44","Type":"ContainerStarted","Data":"3dd035a4f13b03439ee33be47c07ea9b941e3cc376154afdd070fc1238b470b9"} Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.662964 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerStarted","Data":"e9a38a9ac495bc917495ac896f1a02cd876fb20265c786e3038568aaa054b1fa"} Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.663353 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.681912 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-x6qjf" podStartSLOduration=1.433421855 podStartE2EDuration="5.681890214s" podCreationTimestamp="2025-12-12 04:51:48 +0000 UTC" firstStartedPulling="2025-12-12 04:51:49.087865564 +0000 UTC m=+1099.963882711" lastFinishedPulling="2025-12-12 04:51:53.336333903 +0000 UTC m=+1104.212351070" observedRunningTime="2025-12-12 04:51:53.675567095 +0000 UTC m=+1104.551584242" watchObservedRunningTime="2025-12-12 04:51:53.681890214 +0000 UTC m=+1104.557907381" Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.711207 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.845593571 podStartE2EDuration="1m16.711187212s" podCreationTimestamp="2025-12-12 04:50:37 +0000 UTC" firstStartedPulling="2025-12-12 04:50:38.840799488 +0000 UTC m=+1029.716816635" lastFinishedPulling="2025-12-12 04:51:18.706393129 +0000 UTC m=+1069.582410276" observedRunningTime="2025-12-12 04:51:53.704125001 +0000 UTC m=+1104.580142148" watchObservedRunningTime="2025-12-12 04:51:53.711187212 +0000 UTC m=+1104.587204359" Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.724452 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.787486 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.788045 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-cjsg7" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="dnsmasq-dns" containerID="cri-o://ae3832fde56a3737fffc124308918d7f38425baa9a935c6365f0a5f287b44c4d" gracePeriod=10 Dec 12 04:51:53 crc kubenswrapper[4796]: I1212 04:51:53.863408 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ml9sj-config-lgl5b"] Dec 12 04:51:53 crc kubenswrapper[4796]: W1212 04:51:53.880497 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353370ea_80af_4056_adb5_491a3a1c1cb8.slice/crio-cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700 WatchSource:0}: Error finding container cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700: Status 404 returned error can't find the container with id cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700 Dec 12 04:51:54 crc 
kubenswrapper[4796]: I1212 04:51:54.677115 4796 generic.go:334] "Generic (PLEG): container finished" podID="353370ea-80af-4056-adb5-491a3a1c1cb8" containerID="c4da76d45ace94386b384f4a759ac4788636265db5d21574e1d23b6c8ee53ac6" exitCode=0 Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.677600 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ml9sj-config-lgl5b" event={"ID":"353370ea-80af-4056-adb5-491a3a1c1cb8","Type":"ContainerDied","Data":"c4da76d45ace94386b384f4a759ac4788636265db5d21574e1d23b6c8ee53ac6"} Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.677626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ml9sj-config-lgl5b" event={"ID":"353370ea-80af-4056-adb5-491a3a1c1cb8","Type":"ContainerStarted","Data":"cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700"} Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.690933 4796 generic.go:334] "Generic (PLEG): container finished" podID="89bf0a84-aef6-435e-9334-038c98f04c82" containerID="ae3832fde56a3737fffc124308918d7f38425baa9a935c6365f0a5f287b44c4d" exitCode=0 Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.691781 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cjsg7" event={"ID":"89bf0a84-aef6-435e-9334-038c98f04c82","Type":"ContainerDied","Data":"ae3832fde56a3737fffc124308918d7f38425baa9a935c6365f0a5f287b44c4d"} Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.691818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cjsg7" event={"ID":"89bf0a84-aef6-435e-9334-038c98f04c82","Type":"ContainerDied","Data":"16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd"} Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.691834 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d696f9b9f37bf01ef60a180da675f0a7280ebe6663e6ce5ad7de228d2c43dd" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.713224 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.842015 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx7c6\" (UniqueName: \"kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6\") pod \"89bf0a84-aef6-435e-9334-038c98f04c82\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.842086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config\") pod \"89bf0a84-aef6-435e-9334-038c98f04c82\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.842121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc\") pod \"89bf0a84-aef6-435e-9334-038c98f04c82\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.842166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb\") pod \"89bf0a84-aef6-435e-9334-038c98f04c82\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.842272 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb\") pod \"89bf0a84-aef6-435e-9334-038c98f04c82\" (UID: \"89bf0a84-aef6-435e-9334-038c98f04c82\") " Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.861270 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6" (OuterVolumeSpecName: "kube-api-access-tx7c6") pod "89bf0a84-aef6-435e-9334-038c98f04c82" (UID: "89bf0a84-aef6-435e-9334-038c98f04c82"). InnerVolumeSpecName "kube-api-access-tx7c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.883640 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89bf0a84-aef6-435e-9334-038c98f04c82" (UID: "89bf0a84-aef6-435e-9334-038c98f04c82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.888107 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89bf0a84-aef6-435e-9334-038c98f04c82" (UID: "89bf0a84-aef6-435e-9334-038c98f04c82"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.901234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config" (OuterVolumeSpecName: "config") pod "89bf0a84-aef6-435e-9334-038c98f04c82" (UID: "89bf0a84-aef6-435e-9334-038c98f04c82"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.907042 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89bf0a84-aef6-435e-9334-038c98f04c82" (UID: "89bf0a84-aef6-435e-9334-038c98f04c82"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.944364 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx7c6\" (UniqueName: \"kubernetes.io/projected/89bf0a84-aef6-435e-9334-038c98f04c82-kube-api-access-tx7c6\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.944405 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.944417 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.944428 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:54 crc kubenswrapper[4796]: I1212 04:51:54.944439 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89bf0a84-aef6-435e-9334-038c98f04c82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:55 crc kubenswrapper[4796]: I1212 04:51:55.698773 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cjsg7" Dec 12 04:51:55 crc kubenswrapper[4796]: I1212 04:51:55.722520 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:55 crc kubenswrapper[4796]: I1212 04:51:55.727885 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cjsg7"] Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.036251 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162484 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162519 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run" (OuterVolumeSpecName: "var-run") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162564 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162587 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162687 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbnf\" (UniqueName: \"kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts\") pod \"353370ea-80af-4056-adb5-491a3a1c1cb8\" (UID: \"353370ea-80af-4056-adb5-491a3a1c1cb8\") " Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.162721 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.163323 4796 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.163339 4796 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.163347 4796 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/353370ea-80af-4056-adb5-491a3a1c1cb8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.163789 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.163861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts" (OuterVolumeSpecName: "scripts") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.167480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf" (OuterVolumeSpecName: "kube-api-access-cmbnf") pod "353370ea-80af-4056-adb5-491a3a1c1cb8" (UID: "353370ea-80af-4056-adb5-491a3a1c1cb8"). InnerVolumeSpecName "kube-api-access-cmbnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.264586 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmbnf\" (UniqueName: \"kubernetes.io/projected/353370ea-80af-4056-adb5-491a3a1c1cb8-kube-api-access-cmbnf\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.264899 4796 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.264909 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353370ea-80af-4056-adb5-491a3a1c1cb8-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.706980 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ml9sj-config-lgl5b" event={"ID":"353370ea-80af-4056-adb5-491a3a1c1cb8","Type":"ContainerDied","Data":"cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700"} Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.707015 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb2837c725b002b0e3842a1b7d587a069d77fed6edad4f0080f4ddcc0f60c700" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.707066 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ml9sj-config-lgl5b" Dec 12 04:51:56 crc kubenswrapper[4796]: I1212 04:51:56.986710 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ml9sj" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.151528 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4jrtb"] Dec 12 04:51:57 crc kubenswrapper[4796]: E1212 04:51:57.151939 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="init" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.151966 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="init" Dec 12 04:51:57 crc kubenswrapper[4796]: E1212 04:51:57.151981 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353370ea-80af-4056-adb5-491a3a1c1cb8" containerName="ovn-config" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.151990 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="353370ea-80af-4056-adb5-491a3a1c1cb8" containerName="ovn-config" Dec 12 04:51:57 crc kubenswrapper[4796]: E1212 04:51:57.152026 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="dnsmasq-dns" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.152035 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="dnsmasq-dns" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.152251 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="353370ea-80af-4056-adb5-491a3a1c1cb8" containerName="ovn-config" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.152297 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" containerName="dnsmasq-dns" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.152967 4796 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.165831 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wk87j" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.168486 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.178889 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jrtb"] Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.192879 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ml9sj-config-lgl5b"] Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.202537 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ml9sj-config-lgl5b"] Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.281786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.281849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvxg\" (UniqueName: \"kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.281908 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.281940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.383538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.383600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvxg\" (UniqueName: \"kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.383666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data\") pod \"glance-db-sync-4jrtb\" 
(UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.383698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.387771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.391804 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.395820 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.400760 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvxg\" (UniqueName: \"kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg\") pod \"glance-db-sync-4jrtb\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.421458 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353370ea-80af-4056-adb5-491a3a1c1cb8" path="/var/lib/kubelet/pods/353370ea-80af-4056-adb5-491a3a1c1cb8/volumes" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.422114 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89bf0a84-aef6-435e-9334-038c98f04c82" path="/var/lib/kubelet/pods/89bf0a84-aef6-435e-9334-038c98f04c82/volumes" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.469734 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4jrtb" Dec 12 04:51:57 crc kubenswrapper[4796]: I1212 04:51:57.996127 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jrtb"] Dec 12 04:51:58 crc kubenswrapper[4796]: I1212 04:51:58.729594 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrtb" event={"ID":"4803e636-bb92-4795-ad4a-76cbbb4e4edc","Type":"ContainerStarted","Data":"690a8b9914fba099d6e6467a11bf6b59453fe2df42fc1cad398627a7d42f1418"} Dec 12 04:51:59 crc kubenswrapper[4796]: I1212 04:51:59.739343 4796 generic.go:334] "Generic (PLEG): container finished" podID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerID="dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681" exitCode=0 Dec 12 04:51:59 crc kubenswrapper[4796]: I1212 04:51:59.739398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerDied","Data":"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681"} Dec 12 04:52:00 crc kubenswrapper[4796]: I1212 04:52:00.535628 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:52:00 crc kubenswrapper[4796]: E1212 04:52:00.535818 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 12 04:52:00 crc kubenswrapper[4796]: E1212 04:52:00.535986 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 12 04:52:00 crc kubenswrapper[4796]: E1212 04:52:00.536238 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift podName:09626c7b-0eba-4fe5-9598-ac562516cb98 nodeName:}" failed. No retries permitted until 2025-12-12 04:52:16.53601821 +0000 UTC m=+1127.412035357 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift") pod "swift-storage-0" (UID: "09626c7b-0eba-4fe5-9598-ac562516cb98") : configmap "swift-ring-files" not found Dec 12 04:52:00 crc kubenswrapper[4796]: I1212 04:52:00.749947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerStarted","Data":"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549"} Dec 12 04:52:00 crc kubenswrapper[4796]: I1212 04:52:00.750578 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 12 04:52:00 crc kubenswrapper[4796]: I1212 04:52:00.778969 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371952.075825 podStartE2EDuration="1m24.778951292s" podCreationTimestamp="2025-12-12 04:50:36 +0000 UTC" firstStartedPulling="2025-12-12 04:50:38.810678197 +0000 UTC m=+1029.686695344" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:00.769870146 +0000 UTC m=+1111.645887293" watchObservedRunningTime="2025-12-12 04:52:00.778951292 +0000 UTC m=+1111.654968429" Dec 12 04:52:02 crc kubenswrapper[4796]: I1212 04:52:02.786629 4796 generic.go:334] "Generic (PLEG): container finished" podID="db25cf2d-5a36-4289-b5d2-3a156acaee44" containerID="3dd035a4f13b03439ee33be47c07ea9b941e3cc376154afdd070fc1238b470b9" exitCode=0 Dec 12 04:52:02 crc kubenswrapper[4796]: I1212 04:52:02.786740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x6qjf" event={"ID":"db25cf2d-5a36-4289-b5d2-3a156acaee44","Type":"ContainerDied","Data":"3dd035a4f13b03439ee33be47c07ea9b941e3cc376154afdd070fc1238b470b9"} Dec 12 04:52:08 crc kubenswrapper[4796]: I1212 04:52:08.518380 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.312182 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn4mh\" (UniqueName: \"kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322659 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322694 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322781 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.322881 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf\") pod \"db25cf2d-5a36-4289-b5d2-3a156acaee44\" (UID: \"db25cf2d-5a36-4289-b5d2-3a156acaee44\") " Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.325599 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.326533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.342534 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh" (OuterVolumeSpecName: "kube-api-access-mn4mh") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "kube-api-access-mn4mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.352061 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.361954 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.391251 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.400205 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts" (OuterVolumeSpecName: "scripts") pod "db25cf2d-5a36-4289-b5d2-3a156acaee44" (UID: "db25cf2d-5a36-4289-b5d2-3a156acaee44"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440371 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn4mh\" (UniqueName: \"kubernetes.io/projected/db25cf2d-5a36-4289-b5d2-3a156acaee44-kube-api-access-mn4mh\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440425 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440439 4796 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440501 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440515 4796 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/db25cf2d-5a36-4289-b5d2-3a156acaee44-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440527 4796 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/db25cf2d-5a36-4289-b5d2-3a156acaee44-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.440539 4796 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/db25cf2d-5a36-4289-b5d2-3a156acaee44-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.862835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrtb" event={"ID":"4803e636-bb92-4795-ad4a-76cbbb4e4edc","Type":"ContainerStarted","Data":"c28a591c50f93bc773a2c9900a86bce141ecbde4b93045b8e22b23bcf7a47a2d"} Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.864124 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x6qjf" event={"ID":"db25cf2d-5a36-4289-b5d2-3a156acaee44","Type":"ContainerDied","Data":"bce7702cfc9cde3cbf215dd0e528ba4bf2d5171538c6b8fd53a72c6990b4c92d"} Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.864183 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce7702cfc9cde3cbf215dd0e528ba4bf2d5171538c6b8fd53a72c6990b4c92d" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.864205 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x6qjf" Dec 12 04:52:11 crc kubenswrapper[4796]: I1212 04:52:11.882244 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4jrtb" podStartSLOduration=1.573310649 podStartE2EDuration="14.882222905s" podCreationTimestamp="2025-12-12 04:51:57 +0000 UTC" firstStartedPulling="2025-12-12 04:51:57.999945452 +0000 UTC m=+1108.875962599" lastFinishedPulling="2025-12-12 04:52:11.308857718 +0000 UTC m=+1122.184874855" observedRunningTime="2025-12-12 04:52:11.881496473 +0000 UTC m=+1122.757513630" watchObservedRunningTime="2025-12-12 04:52:11.882222905 +0000 UTC m=+1122.758240052" Dec 12 04:52:16 crc kubenswrapper[4796]: I1212 04:52:16.619998 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:52:16 crc kubenswrapper[4796]: I1212 04:52:16.633204 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/09626c7b-0eba-4fe5-9598-ac562516cb98-etc-swift\") pod \"swift-storage-0\" (UID: \"09626c7b-0eba-4fe5-9598-ac562516cb98\") " pod="openstack/swift-storage-0" Dec 12 04:52:16 crc kubenswrapper[4796]: I1212 04:52:16.840734 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 12 04:52:17 crc kubenswrapper[4796]: I1212 04:52:17.407030 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 12 04:52:17 crc kubenswrapper[4796]: W1212 04:52:17.415785 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09626c7b_0eba_4fe5_9598_ac562516cb98.slice/crio-f327efdeafafaa2ffb93b2a1177838a8195b8df730b7d36ae54340b4bb64a55d WatchSource:0}: Error finding container f327efdeafafaa2ffb93b2a1177838a8195b8df730b7d36ae54340b4bb64a55d: Status 404 returned error can't find the container with id f327efdeafafaa2ffb93b2a1177838a8195b8df730b7d36ae54340b4bb64a55d Dec 12 04:52:17 crc kubenswrapper[4796]: I1212 04:52:17.916526 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"f327efdeafafaa2ffb93b2a1177838a8195b8df730b7d36ae54340b4bb64a55d"} Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.142570 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.590624 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zp246"] Dec 12 04:52:18 crc kubenswrapper[4796]: E1212 04:52:18.590989 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db25cf2d-5a36-4289-b5d2-3a156acaee44" containerName="swift-ring-rebalance" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.591004 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="db25cf2d-5a36-4289-b5d2-3a156acaee44" containerName="swift-ring-rebalance" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.591190 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="db25cf2d-5a36-4289-b5d2-3a156acaee44" containerName="swift-ring-rebalance" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.591706 
4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.597617 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dfca-account-create-update-h429p"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.599394 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.603766 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.610829 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zp246"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.618505 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dfca-account-create-update-h429p"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.655644 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tps95\" (UniqueName: \"kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.655721 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.655755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.655787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvthx\" (UniqueName: \"kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.677692 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gw45v"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.678823 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.690802 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gw45v"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.708037 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e29a-account-create-update-dqt68"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.709029 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.712780 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.722426 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e29a-account-create-update-dqt68"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756704 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkk2\" (UniqueName: \"kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvthx\" (UniqueName: \"kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756828 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756875 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqtp\" (UniqueName: \"kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756906 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.756931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tps95\" (UniqueName: 
\"kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.757805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.757950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.788844 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvthx\" (UniqueName: \"kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx\") pod \"cinder-db-create-zp246\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.788968 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tps95\" (UniqueName: \"kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95\") pod \"cinder-dfca-account-create-update-h429p\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.867137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkk2\" (UniqueName: \"kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.867600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.867725 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqtp\" (UniqueName: \"kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.867826 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.875239 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.899792 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.917478 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fb2cw"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.918155 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zp246" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.918708 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.924239 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.934607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4fcf-account-create-update-h9nv4"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.940369 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.941561 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fb2cw"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.944053 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.952390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4fcf-account-create-update-h9nv4"] Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.956293 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkk2\" (UniqueName: \"kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2\") pod \"barbican-db-create-gw45v\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.970831 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqtp\" (UniqueName: \"kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp\") pod \"barbican-e29a-account-create-update-dqt68\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.972202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4v2\" (UniqueName: \"kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:18 crc 
kubenswrapper[4796]: I1212 04:52:18.972259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.972323 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrch\" (UniqueName: \"kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.972377 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:18 crc kubenswrapper[4796]: I1212 04:52:18.995294 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.018986 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-b8llp"] Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.019928 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.024499 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.024499 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.024610 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.024634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7r2qv" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.030772 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.042457 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8llp"] Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.074002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4v2\" (UniqueName: \"kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.074755 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.075967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmrch\" (UniqueName: \"kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.076136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.075482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.078263 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.091053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4v2\" (UniqueName: \"kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2\") pod \"neutron-4fcf-account-create-update-h9nv4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.105783 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmrch\" (UniqueName: \"kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch\") pod \"neutron-db-create-fb2cw\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.177868 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.177968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.178124 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z65h\" (UniqueName: \"kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.253062 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.278769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.278852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.278892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z65h\" (UniqueName: \"kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.279413 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.283263 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.283863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.305141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z65h\" (UniqueName: \"kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h\") pod \"keystone-db-sync-b8llp\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.350768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.951310 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"628f776499098b27f5b2800cee10f2fc44cd188bbc9a6f01972f75a43594be7c"} Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.967546 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gw45v"] Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.972894 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zp246"] Dec 12 04:52:19 crc kubenswrapper[4796]: W1212 04:52:19.987803 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e668f1e_c987_43ad_b5de_06a419b8935d.slice/crio-22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636 WatchSource:0}: Error finding container 22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636: Status 404 returned error can't find the container with id 22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636 Dec 12 04:52:19 crc kubenswrapper[4796]: I1212 04:52:19.999353 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e29a-account-create-update-dqt68"] Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.038431 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dfca-account-create-update-h429p"] Dec 12 04:52:20 crc kubenswrapper[4796]: W1212 04:52:20.101578 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ca6dbe_d88e_416d_bff4_2944a012764f.slice/crio-9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1 WatchSource:0}: Error finding container 9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1: Status 404 returned error can't find the container with id 9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1 Dec 12 04:52:20 crc kubenswrapper[4796]: W1212 04:52:20.101921 4796 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8da87676_800b_429b_8091_01d0756398ba.slice/crio-038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029 WatchSource:0}: Error finding container 038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029: Status 404 returned error can't find the container with id 038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029 Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.165528 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fb2cw"] Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.183671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4fcf-account-create-update-h9nv4"] Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.248713 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8llp"] Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.980814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"6b78ef821a4f9e9fe1832cd9a23460436342520fd5461e16a198207cf4d10e73"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.981150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"9c29be807f787c2ede8a5ca9ce7d5e547775f7b3cd558b7b304c6cbc5c8b30b2"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.981164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"556f334f6e7a27b31f12658c222bbc8ce933ed0692073a2b1ac47b8d366b794a"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.982507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfca-account-create-update-h429p" event={"ID":"8da87676-800b-429b-8091-01d0756398ba","Type":"ContainerStarted","Data":"9e7eb64db528e65506263db258cbd0969b9b18c7ffe9db938cf4fed2bc8d978d"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.982541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfca-account-create-update-h429p" event={"ID":"8da87676-800b-429b-8091-01d0756398ba","Type":"ContainerStarted","Data":"038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.987481 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e29a-account-create-update-dqt68" event={"ID":"a1ca6dbe-d88e-416d-bff4-2944a012764f","Type":"ContainerStarted","Data":"929e3eeac75ad0f1095a83431fa9111093dcad47e8adbb8550717c3c70580f23"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.987519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e29a-account-create-update-dqt68" event={"ID":"a1ca6dbe-d88e-416d-bff4-2944a012764f","Type":"ContainerStarted","Data":"9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.990847 4796 generic.go:334] "Generic (PLEG): container finished" podID="476f745e-37e2-44ce-a24d-3326352757da" containerID="ebda42774f565ed31224ce6f6ae9f99428c77701dc748722055c74037b29ff04" exitCode=0 Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.990930 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zp246" 
event={"ID":"476f745e-37e2-44ce-a24d-3326352757da","Type":"ContainerDied","Data":"ebda42774f565ed31224ce6f6ae9f99428c77701dc748722055c74037b29ff04"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.990952 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zp246" event={"ID":"476f745e-37e2-44ce-a24d-3326352757da","Type":"ContainerStarted","Data":"450534afbaf662baf4ee9e1905faacdcd466740dd4129e78f94c5d063fc6fe0f"} Dec 12 04:52:20 crc kubenswrapper[4796]: I1212 04:52:20.993389 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8llp" event={"ID":"ce82eef6-1a2a-45fa-8729-ea625e2863a9","Type":"ContainerStarted","Data":"1c479d473506f6cd2f85a1b8b44dc9d77f6154f80813134876666b937f88b958"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.005232 4796 generic.go:334] "Generic (PLEG): container finished" podID="6fca11a6-5060-41a2-8294-405e1a9a7869" containerID="dad4cb511f37d8d6c9d7b540d91e770438db48c40160f1a267dbbacee9568ad3" exitCode=0 Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.005534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fb2cw" event={"ID":"6fca11a6-5060-41a2-8294-405e1a9a7869","Type":"ContainerDied","Data":"dad4cb511f37d8d6c9d7b540d91e770438db48c40160f1a267dbbacee9568ad3"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.005573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fb2cw" event={"ID":"6fca11a6-5060-41a2-8294-405e1a9a7869","Type":"ContainerStarted","Data":"7a255c39eb170110194395aee01ac586fee6923bec331251871c8daad1ce2672"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.009197 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dfca-account-create-update-h429p" podStartSLOduration=3.009182121 podStartE2EDuration="3.009182121s" podCreationTimestamp="2025-12-12 04:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:21.002296716 +0000 UTC m=+1131.878313863" watchObservedRunningTime="2025-12-12 04:52:21.009182121 +0000 UTC m=+1131.885199268" Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.014334 4796 generic.go:334] "Generic (PLEG): container finished" podID="4803e636-bb92-4795-ad4a-76cbbb4e4edc" containerID="c28a591c50f93bc773a2c9900a86bce141ecbde4b93045b8e22b23bcf7a47a2d" exitCode=0 Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.014403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrtb" event={"ID":"4803e636-bb92-4795-ad4a-76cbbb4e4edc","Type":"ContainerDied","Data":"c28a591c50f93bc773a2c9900a86bce141ecbde4b93045b8e22b23bcf7a47a2d"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.019043 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4fcf-account-create-update-h9nv4" event={"ID":"869d643d-6e95-4c96-aa41-570474e69ff4","Type":"ContainerStarted","Data":"61d7f86c420a473959122561b45d043ed087310800fdeb81dd713880b200c55c"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.019075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4fcf-account-create-update-h9nv4" event={"ID":"869d643d-6e95-4c96-aa41-570474e69ff4","Type":"ContainerStarted","Data":"a5de0b502941a4d74502e2724bd9dc18df5bdeb570221c6ae905ed301b6d7dc4"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.023206 4796 generic.go:334] "Generic (PLEG): container 
finished" podID="7e668f1e-c987-43ad-b5de-06a419b8935d" containerID="fd7df020bedbf00d0fb075b70be2503ed8a0b42d9ca1a91faada12709cf11ba5" exitCode=0 Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.023251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gw45v" event={"ID":"7e668f1e-c987-43ad-b5de-06a419b8935d","Type":"ContainerDied","Data":"fd7df020bedbf00d0fb075b70be2503ed8a0b42d9ca1a91faada12709cf11ba5"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.023288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gw45v" event={"ID":"7e668f1e-c987-43ad-b5de-06a419b8935d","Type":"ContainerStarted","Data":"22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636"} Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.025885 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-e29a-account-create-update-dqt68" podStartSLOduration=3.025863275 podStartE2EDuration="3.025863275s" podCreationTimestamp="2025-12-12 04:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:21.021827979 +0000 UTC m=+1131.897845126" watchObservedRunningTime="2025-12-12 04:52:21.025863275 +0000 UTC m=+1131.901880422" Dec 12 04:52:21 crc kubenswrapper[4796]: I1212 04:52:21.130737 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4fcf-account-create-update-h9nv4" podStartSLOduration=3.130716124 podStartE2EDuration="3.130716124s" podCreationTimestamp="2025-12-12 04:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:21.121683531 +0000 UTC m=+1131.997700678" watchObservedRunningTime="2025-12-12 04:52:21.130716124 +0000 UTC m=+1132.006733271" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.038999 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1ca6dbe-d88e-416d-bff4-2944a012764f" containerID="929e3eeac75ad0f1095a83431fa9111093dcad47e8adbb8550717c3c70580f23" exitCode=0 Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.039413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e29a-account-create-update-dqt68" event={"ID":"a1ca6dbe-d88e-416d-bff4-2944a012764f","Type":"ContainerDied","Data":"929e3eeac75ad0f1095a83431fa9111093dcad47e8adbb8550717c3c70580f23"} Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.044573 4796 generic.go:334] "Generic (PLEG): container finished" podID="869d643d-6e95-4c96-aa41-570474e69ff4" containerID="61d7f86c420a473959122561b45d043ed087310800fdeb81dd713880b200c55c" exitCode=0 Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.044628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4fcf-account-create-update-h9nv4" event={"ID":"869d643d-6e95-4c96-aa41-570474e69ff4","Type":"ContainerDied","Data":"61d7f86c420a473959122561b45d043ed087310800fdeb81dd713880b200c55c"} Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.046723 4796 generic.go:334] "Generic (PLEG): container finished" podID="8da87676-800b-429b-8091-01d0756398ba" containerID="9e7eb64db528e65506263db258cbd0969b9b18c7ffe9db938cf4fed2bc8d978d" exitCode=0 Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.046874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfca-account-create-update-h429p" 
event={"ID":"8da87676-800b-429b-8091-01d0756398ba","Type":"ContainerDied","Data":"9e7eb64db528e65506263db258cbd0969b9b18c7ffe9db938cf4fed2bc8d978d"} Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.913578 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zp246" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.920227 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.938364 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.956408 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts\") pod \"6fca11a6-5060-41a2-8294-405e1a9a7869\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.956901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmrch\" (UniqueName: \"kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch\") pod \"6fca11a6-5060-41a2-8294-405e1a9a7869\" (UID: \"6fca11a6-5060-41a2-8294-405e1a9a7869\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.957560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts\") pod \"476f745e-37e2-44ce-a24d-3326352757da\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.957813 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvthx\" (UniqueName: \"kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx\") pod \"476f745e-37e2-44ce-a24d-3326352757da\" (UID: \"476f745e-37e2-44ce-a24d-3326352757da\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.957930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts\") pod \"7e668f1e-c987-43ad-b5de-06a419b8935d\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.958065 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkk2\" (UniqueName: \"kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2\") pod \"7e668f1e-c987-43ad-b5de-06a419b8935d\" (UID: \"7e668f1e-c987-43ad-b5de-06a419b8935d\") " Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.958830 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e668f1e-c987-43ad-b5de-06a419b8935d" (UID: "7e668f1e-c987-43ad-b5de-06a419b8935d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.958971 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "476f745e-37e2-44ce-a24d-3326352757da" (UID: "476f745e-37e2-44ce-a24d-3326352757da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.959314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fca11a6-5060-41a2-8294-405e1a9a7869" (UID: "6fca11a6-5060-41a2-8294-405e1a9a7869"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.959625 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e668f1e-c987-43ad-b5de-06a419b8935d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.959744 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fca11a6-5060-41a2-8294-405e1a9a7869-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.959867 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/476f745e-37e2-44ce-a24d-3326352757da-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.971726 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrtb" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.972716 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2" (OuterVolumeSpecName: "kube-api-access-wxkk2") pod "7e668f1e-c987-43ad-b5de-06a419b8935d" (UID: "7e668f1e-c987-43ad-b5de-06a419b8935d"). InnerVolumeSpecName "kube-api-access-wxkk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.975734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch" (OuterVolumeSpecName: "kube-api-access-bmrch") pod "6fca11a6-5060-41a2-8294-405e1a9a7869" (UID: "6fca11a6-5060-41a2-8294-405e1a9a7869"). InnerVolumeSpecName "kube-api-access-bmrch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:22 crc kubenswrapper[4796]: I1212 04:52:22.977387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx" (OuterVolumeSpecName: "kube-api-access-zvthx") pod "476f745e-37e2-44ce-a24d-3326352757da" (UID: "476f745e-37e2-44ce-a24d-3326352757da"). InnerVolumeSpecName "kube-api-access-zvthx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.056171 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gw45v" event={"ID":"7e668f1e-c987-43ad-b5de-06a419b8935d","Type":"ContainerDied","Data":"22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636"} Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.056207 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22eb27872a7d44795abd032d8578364ab18a65557428483da86a350b7a2e3636" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.056214 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gw45v" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jrtb" event={"ID":"4803e636-bb92-4795-ad4a-76cbbb4e4edc","Type":"ContainerDied","Data":"690a8b9914fba099d6e6467a11bf6b59453fe2df42fc1cad398627a7d42f1418"} Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060183 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690a8b9914fba099d6e6467a11bf6b59453fe2df42fc1cad398627a7d42f1418" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060245 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jrtb" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060643 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data\") pod \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060676 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle\") pod \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data\") pod \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.060870 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvxg\" (UniqueName: \"kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg\") pod \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\" (UID: \"4803e636-bb92-4795-ad4a-76cbbb4e4edc\") " Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.061774 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmrch\" (UniqueName: \"kubernetes.io/projected/6fca11a6-5060-41a2-8294-405e1a9a7869-kube-api-access-bmrch\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.061792 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvthx\" (UniqueName: \"kubernetes.io/projected/476f745e-37e2-44ce-a24d-3326352757da-kube-api-access-zvthx\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.061818 4796 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkk2\" (UniqueName: \"kubernetes.io/projected/7e668f1e-c987-43ad-b5de-06a419b8935d-kube-api-access-wxkk2\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.063548 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zp246" event={"ID":"476f745e-37e2-44ce-a24d-3326352757da","Type":"ContainerDied","Data":"450534afbaf662baf4ee9e1905faacdcd466740dd4129e78f94c5d063fc6fe0f"} Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.063556 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zp246" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.063571 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="450534afbaf662baf4ee9e1905faacdcd466740dd4129e78f94c5d063fc6fe0f" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.064708 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg" (OuterVolumeSpecName: "kube-api-access-lvvxg") pod "4803e636-bb92-4795-ad4a-76cbbb4e4edc" (UID: "4803e636-bb92-4795-ad4a-76cbbb4e4edc"). InnerVolumeSpecName "kube-api-access-lvvxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.065458 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fb2cw" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.065807 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fb2cw" event={"ID":"6fca11a6-5060-41a2-8294-405e1a9a7869","Type":"ContainerDied","Data":"7a255c39eb170110194395aee01ac586fee6923bec331251871c8daad1ce2672"} Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.065834 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a255c39eb170110194395aee01ac586fee6923bec331251871c8daad1ce2672" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.084017 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4803e636-bb92-4795-ad4a-76cbbb4e4edc" (UID: "4803e636-bb92-4795-ad4a-76cbbb4e4edc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.087757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4803e636-bb92-4795-ad4a-76cbbb4e4edc" (UID: "4803e636-bb92-4795-ad4a-76cbbb4e4edc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.156358 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data" (OuterVolumeSpecName: "config-data") pod "4803e636-bb92-4795-ad4a-76cbbb4e4edc" (UID: "4803e636-bb92-4795-ad4a-76cbbb4e4edc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.164001 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.164033 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.164045 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4803e636-bb92-4795-ad4a-76cbbb4e4edc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.164057 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvxg\" (UniqueName: \"kubernetes.io/projected/4803e636-bb92-4795-ad4a-76cbbb4e4edc-kube-api-access-lvvxg\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.603530 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:23 crc kubenswrapper[4796]: E1212 04:52:23.609981 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e668f1e-c987-43ad-b5de-06a419b8935d" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e668f1e-c987-43ad-b5de-06a419b8935d" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: E1212 04:52:23.610026 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4803e636-bb92-4795-ad4a-76cbbb4e4edc" containerName="glance-db-sync" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610034 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4803e636-bb92-4795-ad4a-76cbbb4e4edc" containerName="glance-db-sync" Dec 12 04:52:23 crc kubenswrapper[4796]: E1212 04:52:23.610043 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fca11a6-5060-41a2-8294-405e1a9a7869" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610049 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fca11a6-5060-41a2-8294-405e1a9a7869" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: E1212 04:52:23.610066 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476f745e-37e2-44ce-a24d-3326352757da" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610072 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="476f745e-37e2-44ce-a24d-3326352757da" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610234 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fca11a6-5060-41a2-8294-405e1a9a7869" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610256 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="476f745e-37e2-44ce-a24d-3326352757da" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610317 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4803e636-bb92-4795-ad4a-76cbbb4e4edc" 
containerName="glance-db-sync" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.610339 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e668f1e-c987-43ad-b5de-06a419b8935d" containerName="mariadb-database-create" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.611298 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.614588 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.713334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.713386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.713418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ql6\" (UniqueName: \"kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.713586 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.713734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.814800 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.814848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.814868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ql6\" (UniqueName: 
\"kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.814911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.814954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.816047 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.816545 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.816729 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.816886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.842277 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ql6\" (UniqueName: \"kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6\") pod \"dnsmasq-dns-74dc88fc-l45wk\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:23 crc kubenswrapper[4796]: I1212 04:52:23.949887 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.445304 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.462174 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.495393 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zqtp\" (UniqueName: \"kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp\") pod \"a1ca6dbe-d88e-416d-bff4-2944a012764f\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511634 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts\") pod \"8da87676-800b-429b-8091-01d0756398ba\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511673 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tps95\" (UniqueName: \"kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95\") pod \"8da87676-800b-429b-8091-01d0756398ba\" (UID: \"8da87676-800b-429b-8091-01d0756398ba\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511706 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts\") pod \"a1ca6dbe-d88e-416d-bff4-2944a012764f\" (UID: \"a1ca6dbe-d88e-416d-bff4-2944a012764f\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511730 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts\") pod \"869d643d-6e95-4c96-aa41-570474e69ff4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.511749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t4v2\" (UniqueName: \"kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2\") pod \"869d643d-6e95-4c96-aa41-570474e69ff4\" (UID: \"869d643d-6e95-4c96-aa41-570474e69ff4\") " Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.513533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "869d643d-6e95-4c96-aa41-570474e69ff4" (UID: "869d643d-6e95-4c96-aa41-570474e69ff4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.514134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1ca6dbe-d88e-416d-bff4-2944a012764f" (UID: "a1ca6dbe-d88e-416d-bff4-2944a012764f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.514516 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8da87676-800b-429b-8091-01d0756398ba" (UID: "8da87676-800b-429b-8091-01d0756398ba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.525767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95" (OuterVolumeSpecName: "kube-api-access-tps95") pod "8da87676-800b-429b-8091-01d0756398ba" (UID: "8da87676-800b-429b-8091-01d0756398ba"). InnerVolumeSpecName "kube-api-access-tps95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.525894 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2" (OuterVolumeSpecName: "kube-api-access-2t4v2") pod "869d643d-6e95-4c96-aa41-570474e69ff4" (UID: "869d643d-6e95-4c96-aa41-570474e69ff4"). InnerVolumeSpecName "kube-api-access-2t4v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.526724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp" (OuterVolumeSpecName: "kube-api-access-6zqtp") pod "a1ca6dbe-d88e-416d-bff4-2944a012764f" (UID: "a1ca6dbe-d88e-416d-bff4-2944a012764f"). InnerVolumeSpecName "kube-api-access-6zqtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.614914 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zqtp\" (UniqueName: \"kubernetes.io/projected/a1ca6dbe-d88e-416d-bff4-2944a012764f-kube-api-access-6zqtp\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.614941 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da87676-800b-429b-8091-01d0756398ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.614951 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tps95\" (UniqueName: \"kubernetes.io/projected/8da87676-800b-429b-8091-01d0756398ba-kube-api-access-tps95\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.614982 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ca6dbe-d88e-416d-bff4-2944a012764f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.614992 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869d643d-6e95-4c96-aa41-570474e69ff4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.615000 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t4v2\" (UniqueName: \"kubernetes.io/projected/869d643d-6e95-4c96-aa41-570474e69ff4-kube-api-access-2t4v2\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:29.641185 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:30 crc kubenswrapper[4796]: W1212 04:52:29.648436 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4cb39c4_0bcf_41ae_8022_ad8a6f3fe6a2.slice/crio-07b1e9fc9bf222c59d35a46f98419a3ed1a0c18acf95363f178bb57e111a2783 WatchSource:0}: Error finding container 07b1e9fc9bf222c59d35a46f98419a3ed1a0c18acf95363f178bb57e111a2783: Status 404 returned error can't find the container with id 07b1e9fc9bf222c59d35a46f98419a3ed1a0c18acf95363f178bb57e111a2783 Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.130692 4796 generic.go:334] "Generic (PLEG): container finished" podID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerID="3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390" exitCode=0 Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.131032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" event={"ID":"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2","Type":"ContainerDied","Data":"3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.131054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" event={"ID":"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2","Type":"ContainerStarted","Data":"07b1e9fc9bf222c59d35a46f98419a3ed1a0c18acf95363f178bb57e111a2783"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.134139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4fcf-account-create-update-h9nv4" event={"ID":"869d643d-6e95-4c96-aa41-570474e69ff4","Type":"ContainerDied","Data":"a5de0b502941a4d74502e2724bd9dc18df5bdeb570221c6ae905ed301b6d7dc4"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.134160 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5de0b502941a4d74502e2724bd9dc18df5bdeb570221c6ae905ed301b6d7dc4" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.134202 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4fcf-account-create-update-h9nv4" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.141550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"621a334b5527577d0e1756de7d6fa069a9ed5cf8afbc7d6fe696da62354d0312"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.141586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"1ccc9db09a6172fe43877219ef9d1685176372c35ea5d763f44069412d0f411b"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.141595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"c281c68f061bb0e551f1f094a288d63a3ea33dea7030e9b865ca42ad330eafde"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.141604 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"c3e66d5630e469e644ca62da0d045c063c9ea3f05a9bc542bf66a94fbd570dc3"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.143089 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfca-account-create-update-h429p" event={"ID":"8da87676-800b-429b-8091-01d0756398ba","Type":"ContainerDied","Data":"038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.143108 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038c538cf79756a770bb9eb7e61139470bd5fbc82ed28aff18854bc812d1b029" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.143172 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dfca-account-create-update-h429p" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.167484 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e29a-account-create-update-dqt68" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.167485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e29a-account-create-update-dqt68" event={"ID":"a1ca6dbe-d88e-416d-bff4-2944a012764f","Type":"ContainerDied","Data":"9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.167698 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcad7ed4e1ebf87231f88a7a9b7da5f3e1bb196fba6fc3d62253bf4acd5c6c1" Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.168892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8llp" event={"ID":"ce82eef6-1a2a-45fa-8729-ea625e2863a9","Type":"ContainerStarted","Data":"7f363eea3732302ee1f0688578b7e4503e9fbcd48a801e647e9bbb8c6e14e585"} Dec 12 04:52:30 crc kubenswrapper[4796]: I1212 04:52:30.185564 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-b8llp" podStartSLOduration=3.250902056 podStartE2EDuration="12.185546979s" podCreationTimestamp="2025-12-12 04:52:18 +0000 UTC" firstStartedPulling="2025-12-12 04:52:20.296544986 +0000 UTC m=+1131.172562133" lastFinishedPulling="2025-12-12 04:52:29.231189909 +0000 UTC m=+1140.107207056" observedRunningTime="2025-12-12 04:52:30.180028415 +0000 UTC m=+1141.056045562" watchObservedRunningTime="2025-12-12 04:52:30.185546979 +0000 UTC m=+1141.061564126" Dec 12 04:52:31 crc kubenswrapper[4796]: I1212 04:52:31.181775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" event={"ID":"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2","Type":"ContainerStarted","Data":"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e"} Dec 12 04:52:31 crc kubenswrapper[4796]: I1212 04:52:31.181833 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:31 crc kubenswrapper[4796]: I1212 04:52:31.209548 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" podStartSLOduration=8.209527061 podStartE2EDuration="8.209527061s" podCreationTimestamp="2025-12-12 04:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:31.20120558 +0000 UTC m=+1142.077222757" watchObservedRunningTime="2025-12-12 04:52:31.209527061 +0000 UTC m=+1142.085544208" Dec 12 04:52:32 crc kubenswrapper[4796]: I1212 04:52:32.191150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"d0fdb21279f9fdc2a079bf52a3150f00af9ee81d362b17ebeee875d5db38b00c"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.201692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"1e59524fb3b8139c7becca62a466849ec73f14be6a813aca4f66f04ec36ed2e2"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.202780 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"38989334bff7862ce19b1b2627a76f9d21ee2b0455ce71846ed90096b5b3204a"} Dec 12 04:52:33 crc 
kubenswrapper[4796]: I1212 04:52:33.202908 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"53d3157471786e42544634b1d642253a1b927dbeab106ca55f873dcd3b1b97a9"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.202970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"7073e0d12066a59321f41ad50814171b142dfbab05b33646ba068fe939c75530"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.203024 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"2fbfae2a8291d1e189ba862a6f28015aba836aa2014a7c3c65be121de8921ccd"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.203075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"09626c7b-0eba-4fe5-9598-ac562516cb98","Type":"ContainerStarted","Data":"d90ccf13c8d12a871fb7e5fd7701c276352c2a9c466f75385af5d8788229f8b9"} Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.240249 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.681893983 podStartE2EDuration="50.240228344s" podCreationTimestamp="2025-12-12 04:51:43 +0000 UTC" firstStartedPulling="2025-12-12 04:52:17.41847839 +0000 UTC m=+1128.294495557" lastFinishedPulling="2025-12-12 04:52:31.976812771 +0000 UTC m=+1142.852829918" observedRunningTime="2025-12-12 04:52:33.232952267 +0000 UTC m=+1144.108969414" watchObservedRunningTime="2025-12-12 04:52:33.240228344 +0000 UTC m=+1144.116245481" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.517091 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.554670 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:33 crc kubenswrapper[4796]: E1212 04:52:33.555249 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869d643d-6e95-4c96-aa41-570474e69ff4" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.555357 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="869d643d-6e95-4c96-aa41-570474e69ff4" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: E1212 04:52:33.555446 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da87676-800b-429b-8091-01d0756398ba" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.555516 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da87676-800b-429b-8091-01d0756398ba" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: E1212 04:52:33.555582 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ca6dbe-d88e-416d-bff4-2944a012764f" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.555649 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ca6dbe-d88e-416d-bff4-2944a012764f" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.555912 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="869d643d-6e95-4c96-aa41-570474e69ff4" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.555987 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ca6dbe-d88e-416d-bff4-2944a012764f" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.556054 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da87676-800b-429b-8091-01d0756398ba" containerName="mariadb-account-create-update" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.557021 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.563822 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.570143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.698822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.699048 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.699069 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.699096 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.699120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.699405 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ngh\" (UniqueName: \"kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801092 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801147 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801226 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.801315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ngh\" (UniqueName: \"kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.802505 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.803106 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.803727 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.804790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.804820 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.828032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ngh\" (UniqueName: \"kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh\") pod \"dnsmasq-dns-5f59b8f679-djdqk\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:33 crc kubenswrapper[4796]: I1212 04:52:33.874405 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:34 crc kubenswrapper[4796]: I1212 04:52:34.209393 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce82eef6-1a2a-45fa-8729-ea625e2863a9" containerID="7f363eea3732302ee1f0688578b7e4503e9fbcd48a801e647e9bbb8c6e14e585" exitCode=0 Dec 12 04:52:34 crc kubenswrapper[4796]: I1212 04:52:34.209475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8llp" event={"ID":"ce82eef6-1a2a-45fa-8729-ea625e2863a9","Type":"ContainerDied","Data":"7f363eea3732302ee1f0688578b7e4503e9fbcd48a801e647e9bbb8c6e14e585"} Dec 12 04:52:34 crc kubenswrapper[4796]: I1212 04:52:34.209977 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="dnsmasq-dns" containerID="cri-o://e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e" gracePeriod=10 Dec 12 04:52:34 crc kubenswrapper[4796]: I1212 04:52:34.454371 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:34 crc kubenswrapper[4796]: W1212 04:52:34.473680 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ebb900f_974d_4d6f_84b4_eb01653905c2.slice/crio-6ed84e1783764c0bb96564d2775f3f330328f9622a612882d4979e9bfe8e8153 WatchSource:0}: Error finding container 6ed84e1783764c0bb96564d2775f3f330328f9622a612882d4979e9bfe8e8153: Status 404 returned error can't find the container with id 6ed84e1783764c0bb96564d2775f3f330328f9622a612882d4979e9bfe8e8153 Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.101368 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.218687 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerID="87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70" exitCode=0 Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.218765 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" event={"ID":"9ebb900f-974d-4d6f-84b4-eb01653905c2","Type":"ContainerDied","Data":"87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70"} Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.218811 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" event={"ID":"9ebb900f-974d-4d6f-84b4-eb01653905c2","Type":"ContainerStarted","Data":"6ed84e1783764c0bb96564d2775f3f330328f9622a612882d4979e9bfe8e8153"} Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.221147 4796 generic.go:334] "Generic (PLEG): container finished" podID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerID="e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e" exitCode=0 Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.221196 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.221230 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" event={"ID":"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2","Type":"ContainerDied","Data":"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e"} Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.221322 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-l45wk" event={"ID":"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2","Type":"ContainerDied","Data":"07b1e9fc9bf222c59d35a46f98419a3ed1a0c18acf95363f178bb57e111a2783"} Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.221348 4796 scope.go:117] "RemoveContainer" containerID="e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.229807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb\") pod \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.229884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2ql6\" (UniqueName: \"kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6\") pod \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.229931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config\") pod \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.229980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb\") pod 
\"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.230040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc\") pod \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\" (UID: \"c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.242104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6" (OuterVolumeSpecName: "kube-api-access-z2ql6") pod "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" (UID: "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2"). InnerVolumeSpecName "kube-api-access-z2ql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.265840 4796 scope.go:117] "RemoveContainer" containerID="3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.279466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" (UID: "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.292512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" (UID: "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.314164 4796 scope.go:117] "RemoveContainer" containerID="e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.314681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" (UID: "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.314841 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config" (OuterVolumeSpecName: "config") pod "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" (UID: "c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: E1212 04:52:35.325494 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e\": container with ID starting with e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e not found: ID does not exist" containerID="e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.325542 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e"} err="failed to get container status \"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e\": rpc error: code = NotFound desc = could not find container \"e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e\": container with ID starting with e321295232e1ade94f814e95e01b1b096b8bb53e9ee2de6bce1974043316730e not found: ID does not exist" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.325569 4796 scope.go:117] "RemoveContainer" containerID="3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.332251 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.332307 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.332317 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.332326 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2ql6\" (UniqueName: \"kubernetes.io/projected/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-kube-api-access-z2ql6\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.332335 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:35 crc kubenswrapper[4796]: E1212 04:52:35.348418 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390\": container with ID starting with 3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390 not found: ID does not exist" containerID="3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.348475 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390"} err="failed to get container status \"3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390\": rpc error: code = NotFound desc = could not find container \"3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390\": 
container with ID starting with 3c9d216cb46ec8780709d1f92afa9e1c9bf2dbe0e091cd697aa74ec8263ca390 not found: ID does not exist" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.649478 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.666827 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-l45wk"] Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.809966 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.952789 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data\") pod \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.952934 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle\") pod \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.952963 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z65h\" (UniqueName: \"kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h\") pod \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\" (UID: \"ce82eef6-1a2a-45fa-8729-ea625e2863a9\") " Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.958650 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h" (OuterVolumeSpecName: "kube-api-access-5z65h") pod "ce82eef6-1a2a-45fa-8729-ea625e2863a9" (UID: "ce82eef6-1a2a-45fa-8729-ea625e2863a9"). InnerVolumeSpecName "kube-api-access-5z65h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:35 crc kubenswrapper[4796]: I1212 04:52:35.994160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce82eef6-1a2a-45fa-8729-ea625e2863a9" (UID: "ce82eef6-1a2a-45fa-8729-ea625e2863a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.008313 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data" (OuterVolumeSpecName: "config-data") pod "ce82eef6-1a2a-45fa-8729-ea625e2863a9" (UID: "ce82eef6-1a2a-45fa-8729-ea625e2863a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.055211 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.055252 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z65h\" (UniqueName: \"kubernetes.io/projected/ce82eef6-1a2a-45fa-8729-ea625e2863a9-kube-api-access-5z65h\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.055267 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce82eef6-1a2a-45fa-8729-ea625e2863a9-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.231416 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" event={"ID":"9ebb900f-974d-4d6f-84b4-eb01653905c2","Type":"ContainerStarted","Data":"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0"} Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.231508 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.233853 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8llp" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.233846 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8llp" event={"ID":"ce82eef6-1a2a-45fa-8729-ea625e2863a9","Type":"ContainerDied","Data":"1c479d473506f6cd2f85a1b8b44dc9d77f6154f80813134876666b937f88b958"} Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.234003 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c479d473506f6cd2f85a1b8b44dc9d77f6154f80813134876666b937f88b958" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.260437 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" podStartSLOduration=3.26042041 podStartE2EDuration="3.26042041s" podCreationTimestamp="2025-12-12 04:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:36.248818616 +0000 UTC m=+1147.124835843" watchObservedRunningTime="2025-12-12 04:52:36.26042041 +0000 UTC m=+1147.136437557" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.506391 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fkmf2"] Dec 12 04:52:36 crc kubenswrapper[4796]: E1212 04:52:36.506807 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="dnsmasq-dns" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.506831 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="dnsmasq-dns" Dec 12 04:52:36 crc kubenswrapper[4796]: E1212 04:52:36.506853 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce82eef6-1a2a-45fa-8729-ea625e2863a9" containerName="keystone-db-sync" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.506861 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce82eef6-1a2a-45fa-8729-ea625e2863a9" 
containerName="keystone-db-sync" Dec 12 04:52:36 crc kubenswrapper[4796]: E1212 04:52:36.506882 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="init" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.506891 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="init" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.507109 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" containerName="dnsmasq-dns" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.507140 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce82eef6-1a2a-45fa-8729-ea625e2863a9" containerName="keystone-db-sync" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.507904 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.514424 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.515636 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7r2qv" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.520435 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.523410 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.528117 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.530080 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.532592 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fkmf2"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.607835 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.609124 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.633777 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664578 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.664646 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ls6\" (UniqueName: \"kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.714607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lr2pq"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.715656 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.722016 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fdvrn" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.722229 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.722376 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.754098 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lr2pq"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.765888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.765965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766064 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766194 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766246 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766265 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766320 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-89ls6\" (UniqueName: \"kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7b2\" (UniqueName: \"kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766392 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.766457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.786098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.787607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.788006 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.790834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.812573 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wwzfw"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.813724 4796 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.817726 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.817920 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.817995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.818168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vg8z6" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.865216 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ls6\" (UniqueName: \"kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6\") pod \"keystone-bootstrap-fkmf2\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.867859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj7b2\" (UniqueName: \"kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.867920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.867942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.867968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.867992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868061 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868080 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rpl\" (UniqueName: \"kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.868171 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.869117 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.869706 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.869771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config\") pod 
\"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.870013 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.870249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.887554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.894414 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.901332 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.901705 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.901820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.901938 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k5r88" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.902204 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj7b2\" (UniqueName: \"kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2\") pod \"dnsmasq-dns-bbf5cc879-d4ffk\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.932530 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.938588 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wwzfw"] Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969637 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969723 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndk8k\" (UniqueName: \"kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969776 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts\") 
pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hbf\" (UniqueName: \"kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.969979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.970002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rpl\" (UniqueName: \"kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.979115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.982238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.985957 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc kubenswrapper[4796]: I1212 04:52:36.987408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:36 crc 
kubenswrapper[4796]: I1212 04:52:36.988135 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:36.999161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.061839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rpl\" (UniqueName: \"kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl\") pod \"cinder-db-sync-lr2pq\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndk8k\" (UniqueName: \"kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078450 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hbf\" (UniqueName: \"kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.078512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc 
kubenswrapper[4796]: I1212 04:52:37.078568 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.084388 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s672f"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.087338 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.087354 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.088521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.089306 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.098014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.098826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.106992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.107159 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.107423 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v8m2j" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.131150 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.142476 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s672f"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.165240 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndk8k\" (UniqueName: \"kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k\") pod \"horizon-686cc94ff9-lggnr\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.180751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hbf\" (UniqueName: \"kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf\") pod \"neutron-db-sync-wwzfw\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.181670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.181715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.181751 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6567\" (UniqueName: \"kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.209480 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.210946 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.239608 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.264209 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ft5nd"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.268115 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.275656 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292150 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dxhm5" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292174 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292301 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhld\" (UniqueName: \"kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292391 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6567\" (UniqueName: \"kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292549 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292578 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.292733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.294224 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.305909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.313669 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.327540 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.339795 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.354926 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.382880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6567\" (UniqueName: \"kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567\") pod \"barbican-db-sync-s672f\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhld\" (UniqueName: \"kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405716 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds9f\" (UniqueName: \"kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") pod \"placement-db-sync-ft5nd\" (UID: 
\"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405794 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405828 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.405992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.411586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.412181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.413543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.416364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.435996 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhld\" (UniqueName: \"kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld\") pod \"horizon-d574cc9cf-zb7lp\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.443440 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s672f" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.508121 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2" path="/var/lib/kubelet/pods/c4cb39c4-0bcf-41ae-8022-ad8a6f3fe6a2/volumes" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.508978 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ft5nd"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.509010 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.525859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.526151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.526810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cds9f\" (UniqueName: \"kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.526848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.526877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.530397 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.533088 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.533592 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.534515 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.535540 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.535787 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.539562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.540223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.551866 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.552231 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.577164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cds9f\" (UniqueName: \"kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f\") pod \"placement-db-sync-ft5nd\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.601443 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.614386 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ft5nd" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.623737 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.628037 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.628244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktwr\" (UniqueName: \"kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzhs\" (UniqueName: \"kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630447 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630593 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630617 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 
04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630637 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630677 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630712 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.630749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.640858 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.641365 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.641618 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.644066 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wk87j" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.646112 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.700969 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.740149 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dzhs\" (UniqueName: \"kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744348 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744421 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744524 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744558 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcdl\" (UniqueName: \"kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744584 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744617 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744709 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744738 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744769 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744804 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktwr\" (UniqueName: \"kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.744956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.745910 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.749876 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.751402 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.752175 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.754973 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.756261 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.757796 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.758826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.768906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.773892 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.773980 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.775183 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.785007 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.785883 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.788609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzhs\" (UniqueName: \"kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs\") pod \"dnsmasq-dns-56df8fb6b7-5rwzk\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.789586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.799850 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktwr\" (UniqueName: \"kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr\") pod \"ceilometer-0\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847767 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847828 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " 
pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcdl\" (UniqueName: \"kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847870 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847945 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.847992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.848043 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.848059 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5mj\" (UniqueName: \"kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj\") pod \"glance-default-internal-api-0\" (UID: 
\"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.848093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.848121 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.848143 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.849578 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.849617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.850970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.851127 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.855828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.861926 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " 
pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.868482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcdl\" (UniqueName: \"kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.875421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.918790 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.924336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.937335 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950514 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950568 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.950707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww5mj\" (UniqueName: \"kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.951432 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.953690 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.955003 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.972970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.973400 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.976350 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.984870 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww5mj\" (UniqueName: 
\"kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.985381 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:52:37 crc kubenswrapper[4796]: I1212 04:52:37.994828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.016509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.088998 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.312498 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="dnsmasq-dns" containerID="cri-o://5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0" gracePeriod=10 Dec 12 04:52:38 crc kubenswrapper[4796]: W1212 04:52:38.373882 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bea106d_1efb_4240_b08e_e9263d39a0ae.slice/crio-aa748a8be9a074d6ebd204a7635492de85552454f36b54bc6e7c1f039a659176 WatchSource:0}: Error finding container aa748a8be9a074d6ebd204a7635492de85552454f36b54bc6e7c1f039a659176: Status 404 returned error can't find the container with id aa748a8be9a074d6ebd204a7635492de85552454f36b54bc6e7c1f039a659176 Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.426251 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.482537 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fkmf2"] Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.682039 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lr2pq"] Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.753815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:52:38 crc kubenswrapper[4796]: W1212 04:52:38.761620 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c0b2dfd_d78a_45ba_aac1_fab7457e322c.slice/crio-6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637 WatchSource:0}: Error finding container 6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637: Status 404 returned error can't find the container with id 6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637 Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.817651 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wwzfw"] 
Dec 12 04:52:38 crc kubenswrapper[4796]: I1212 04:52:38.989343 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ft5nd"] Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:38.998779 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.006949 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s672f"] Dec 12 04:52:39 crc kubenswrapper[4796]: W1212 04:52:39.026499 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9082485_1887_4b6d_8e1f_371825f61dfc.slice/crio-4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9 WatchSource:0}: Error finding container 4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9: Status 404 returned error can't find the container with id 4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9 Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.238256 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.280443 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.322720 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.370240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wwzfw" event={"ID":"9c0b2dfd-d78a-45ba-aac1-fab7457e322c","Type":"ContainerStarted","Data":"267dc094d0f17957dc3d7616911386fbd4df0d13a6a85376914267685b0644ee"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.370294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wwzfw" event={"ID":"9c0b2dfd-d78a-45ba-aac1-fab7457e322c","Type":"ContainerStarted","Data":"6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.385622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lr2pq" event={"ID":"f2af8481-6c64-4dc2-8028-b5a548dca4ff","Type":"ContainerStarted","Data":"b4f3e0c424c521bcb7ff6783c36e17f2473ffc080bd6520ec555cc5c3c816f7f"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416623 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416682 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416715 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416753 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4ngh\" (UniqueName: \"kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.416910 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.497180 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh" (OuterVolumeSpecName: "kube-api-access-f4ngh") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "kube-api-access-f4ngh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.525012 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4ngh\" (UniqueName: \"kubernetes.io/projected/9ebb900f-974d-4d6f-84b4-eb01653905c2-kube-api-access-f4ngh\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.554751 4796 generic.go:334] "Generic (PLEG): container finished" podID="5bea106d-1efb-4240-b08e-e9263d39a0ae" containerID="b6732038fcf46a03cf876ae0b34ca7ab42f703a36bcb90452ee63f49a04f0e5d" exitCode=0 Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.556438 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wwzfw" podStartSLOduration=3.5564202959999998 podStartE2EDuration="3.556420296s" podCreationTimestamp="2025-12-12 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:39.464671668 +0000 UTC m=+1150.340688815" watchObservedRunningTime="2025-12-12 04:52:39.556420296 +0000 UTC m=+1150.432437443" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.605876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerStarted","Data":"e92affef2c3dc7883ae5270b50468c156647b8cac5b5ab756125de3a072813fc"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.605912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkmf2" event={"ID":"4273160c-2433-4765-a0fb-70700a3378d9","Type":"ContainerStarted","Data":"f87341611826664cf74a9dff24dbf0d1dd36ecc0f6f0d2058621fbcf8ce02f0d"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.605925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkmf2" event={"ID":"4273160c-2433-4765-a0fb-70700a3378d9","Type":"ContainerStarted","Data":"4877dd59d5bac1b7e58188a836aa0d6182db32c91f220911f59de3962901a7d3"} Dec 12 04:52:39 crc kubenswrapper[4796]: 
I1212 04:52:39.605935 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" event={"ID":"5bea106d-1efb-4240-b08e-e9263d39a0ae","Type":"ContainerDied","Data":"b6732038fcf46a03cf876ae0b34ca7ab42f703a36bcb90452ee63f49a04f0e5d"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.605950 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.606710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" event={"ID":"5bea106d-1efb-4240-b08e-e9263d39a0ae","Type":"ContainerStarted","Data":"aa748a8be9a074d6ebd204a7635492de85552454f36b54bc6e7c1f039a659176"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.669832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s672f" event={"ID":"b9082485-1887-4b6d-8e1f-371825f61dfc","Type":"ContainerStarted","Data":"4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.670070 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config" (OuterVolumeSpecName: "config") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.679195 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") pod \"9ebb900f-974d-4d6f-84b4-eb01653905c2\" (UID: \"9ebb900f-974d-4d6f-84b4-eb01653905c2\") " Dec 12 04:52:39 crc kubenswrapper[4796]: W1212 04:52:39.680858 4796 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9ebb900f-974d-4d6f-84b4-eb01653905c2/volumes/kubernetes.io~configmap/config Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.680874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config" (OuterVolumeSpecName: "config") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.733338 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerID="5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0" exitCode=0 Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.733392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" event={"ID":"9ebb900f-974d-4d6f-84b4-eb01653905c2","Type":"ContainerDied","Data":"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.733415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" event={"ID":"9ebb900f-974d-4d6f-84b4-eb01653905c2","Type":"ContainerDied","Data":"6ed84e1783764c0bb96564d2775f3f330328f9622a612882d4979e9bfe8e8153"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.733431 4796 scope.go:117] "RemoveContainer" containerID="5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.733535 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-djdqk" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.756966 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" event={"ID":"beb84d61-a2ac-49bd-9a28-9ffe4095afc8","Type":"ContainerStarted","Data":"4798a5d18cc16b2b4e68de31f40d03a4b0531d03e64412671d7098be41f3d76e"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.776223 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.781635 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.781672 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.781684 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.809317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ft5nd" event={"ID":"32723a76-dbe0-493d-9a87-5c2f46912a71","Type":"ContainerStarted","Data":"4bcf04d5d27a5219d37a00b8c66a7cc83ccd81c84882393378737c794bf59960"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.852014 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.885270 4796 scope.go:117] "RemoveContainer" containerID="87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.886397 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.886416 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.894986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerStarted","Data":"0ed8fbb8910f73c0717c0ac4c11a6ac0ffcb87a42459221249d48458b0b3cefd"} Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.942862 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ebb900f-974d-4d6f-84b4-eb01653905c2" (UID: "9ebb900f-974d-4d6f-84b4-eb01653905c2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:39 crc kubenswrapper[4796]: I1212 04:52:39.988185 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ebb900f-974d-4d6f-84b4-eb01653905c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.000165 4796 scope.go:117] "RemoveContainer" containerID="5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0" Dec 12 04:52:40 crc kubenswrapper[4796]: E1212 04:52:40.000551 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0\": container with ID starting with 5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0 not found: ID does not exist" containerID="5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.000579 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0"} err="failed to get container status \"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0\": rpc error: code = NotFound desc = could not find container \"5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0\": container with ID starting with 5d621cce355700941b53cdf4954b8770f34adff553acee76a578cdda96f0fed0 not found: ID does not exist" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.000599 4796 scope.go:117] "RemoveContainer" containerID="87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70" Dec 12 04:52:40 crc kubenswrapper[4796]: E1212 04:52:40.001037 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70\": container with ID starting with 87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70 not found: ID does not exist" containerID="87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.001055 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70"} err="failed to get container status \"87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70\": rpc error: code = NotFound desc = could not find container \"87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70\": container with ID starting with 87ac71e754e9953530fbf671ad002f0ea0a9356db6f957697cacd0c4b569ee70 not found: ID does not exist" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.170898 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.203188 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fkmf2" podStartSLOduration=4.2031717650000004 podStartE2EDuration="4.203171765s" podCreationTimestamp="2025-12-12 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:40.102206818 +0000 UTC m=+1150.978223965" watchObservedRunningTime="2025-12-12 04:52:40.203171765 +0000 UTC m=+1151.079188912" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.248951 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.271608 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-djdqk"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294007 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj7b2\" (UniqueName: \"kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294249 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294306 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.294373 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb\") pod \"5bea106d-1efb-4240-b08e-e9263d39a0ae\" (UID: \"5bea106d-1efb-4240-b08e-e9263d39a0ae\") " Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.426521 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.440773 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: 
"5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.472816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2" (OuterVolumeSpecName: "kube-api-access-zj7b2") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: "5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "kube-api-access-zj7b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.483854 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: "5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.502510 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj7b2\" (UniqueName: \"kubernetes.io/projected/5bea106d-1efb-4240-b08e-e9263d39a0ae-kube-api-access-zj7b2\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.502536 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.502546 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.521797 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.540900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: "5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.558938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config" (OuterVolumeSpecName: "config") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: "5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.576485 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bea106d-1efb-4240-b08e-e9263d39a0ae" (UID: "5bea106d-1efb-4240-b08e-e9263d39a0ae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.578295 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.593623 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:52:40 crc kubenswrapper[4796]: E1212 04:52:40.594076 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="dnsmasq-dns" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.594101 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="dnsmasq-dns" Dec 12 04:52:40 crc kubenswrapper[4796]: E1212 04:52:40.594125 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="init" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.594132 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="init" Dec 12 04:52:40 crc kubenswrapper[4796]: E1212 04:52:40.594147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bea106d-1efb-4240-b08e-e9263d39a0ae" containerName="init" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.594155 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bea106d-1efb-4240-b08e-e9263d39a0ae" containerName="init" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.594386 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" containerName="dnsmasq-dns" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.594413 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bea106d-1efb-4240-b08e-e9263d39a0ae" containerName="init" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.599171 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605266 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhqm\" (UniqueName: \"kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605377 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605529 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605614 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605624 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.605634 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bea106d-1efb-4240-b08e-e9263d39a0ae-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.616018 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.629251 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.699122 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.706210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.706293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhqm\" (UniqueName: \"kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.706325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.706363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.706418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.707021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.709527 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.713591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.725348 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.733360 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhqm\" (UniqueName: \"kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm\") pod \"horizon-f5d564649-v799z\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 
crc kubenswrapper[4796]: I1212 04:52:40.945902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerStarted","Data":"262bd0a6ff80c3dedf09497bf716b58c407335fc693063e4aaf136e35b690b1b"} Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.961458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerStarted","Data":"f4b71ae8c5fd41cb989cade91f7d5ebb3e41573924e106a94873ea1f5ee723da"} Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.963007 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f5d564649-v799z" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.964232 4796 generic.go:334] "Generic (PLEG): container finished" podID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerID="64f11ff8b3692472b577a6d5efcc3933c68f0b50495436a802445c0734039c52" exitCode=0 Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.964339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" event={"ID":"beb84d61-a2ac-49bd-9a28-9ffe4095afc8","Type":"ContainerDied","Data":"64f11ff8b3692472b577a6d5efcc3933c68f0b50495436a802445c0734039c52"} Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.971383 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.971795 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d4ffk" event={"ID":"5bea106d-1efb-4240-b08e-e9263d39a0ae","Type":"ContainerDied","Data":"aa748a8be9a074d6ebd204a7635492de85552454f36b54bc6e7c1f039a659176"} Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.971837 4796 scope.go:117] "RemoveContainer" containerID="b6732038fcf46a03cf876ae0b34ca7ab42f703a36bcb90452ee63f49a04f0e5d" Dec 12 04:52:40 crc kubenswrapper[4796]: I1212 04:52:40.988113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerStarted","Data":"50832d90bf506163461101b0ccfae955e384a955a2227ca8fae72e5d05360013"} Dec 12 04:52:41 crc kubenswrapper[4796]: I1212 04:52:41.026489 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:41 crc kubenswrapper[4796]: I1212 04:52:41.040715 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d4ffk"] Dec 12 04:52:41 crc kubenswrapper[4796]: I1212 04:52:41.439228 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bea106d-1efb-4240-b08e-e9263d39a0ae" path="/var/lib/kubelet/pods/5bea106d-1efb-4240-b08e-e9263d39a0ae/volumes" Dec 12 04:52:41 crc kubenswrapper[4796]: I1212 04:52:41.440408 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ebb900f-974d-4d6f-84b4-eb01653905c2" path="/var/lib/kubelet/pods/9ebb900f-974d-4d6f-84b4-eb01653905c2/volumes" Dec 12 04:52:41 crc kubenswrapper[4796]: I1212 04:52:41.658368 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:52:41 crc kubenswrapper[4796]: W1212 04:52:41.671114 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b8adc6_3f38_4bb0_92bb_3ba777872a01.slice/crio-efbd7a3eaf188f43ef4634860ab8643fae07b35565c214ee85ec117a0abce39f WatchSource:0}: Error finding container efbd7a3eaf188f43ef4634860ab8643fae07b35565c214ee85ec117a0abce39f: Status 404 returned error can't find the container with id efbd7a3eaf188f43ef4634860ab8643fae07b35565c214ee85ec117a0abce39f Dec 12 04:52:42 crc kubenswrapper[4796]: I1212 04:52:42.028953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" event={"ID":"beb84d61-a2ac-49bd-9a28-9ffe4095afc8","Type":"ContainerStarted","Data":"5262bb061517de6e98754b1e0b8d11c251dde9d14e35ed5df69c798b7b2a976e"} Dec 12 04:52:42 crc kubenswrapper[4796]: I1212 04:52:42.029416 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:42 crc kubenswrapper[4796]: I1212 04:52:42.040013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerStarted","Data":"efbd7a3eaf188f43ef4634860ab8643fae07b35565c214ee85ec117a0abce39f"} Dec 12 04:52:42 crc kubenswrapper[4796]: I1212 04:52:42.073231 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" podStartSLOduration=5.073212049 podStartE2EDuration="5.073212049s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:42.064784675 +0000 UTC m=+1152.940801822" watchObservedRunningTime="2025-12-12 04:52:42.073212049 +0000 UTC m=+1152.949229196" Dec 12 04:52:43 crc kubenswrapper[4796]: I1212 04:52:43.055117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerStarted","Data":"b9a3324d60dda2e0f722bb53799fff8167db9b5713b65d17866fe19647f439f7"} Dec 12 04:52:43 crc kubenswrapper[4796]: I1212 04:52:43.062813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerStarted","Data":"30b67842505800f34f385111cb897bc915649abfdb83a7a6bdd1486a113a2c89"} Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.075324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerStarted","Data":"a9264352b691a9aa0c7721639f73745589a772184a1151678984750ee1707445"} Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.075688 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-log" containerID="cri-o://b9a3324d60dda2e0f722bb53799fff8167db9b5713b65d17866fe19647f439f7" gracePeriod=30 Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.075822 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-httpd" containerID="cri-o://a9264352b691a9aa0c7721639f73745589a772184a1151678984750ee1707445" gracePeriod=30 Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.081503 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerStarted","Data":"220461d7957c578661bb83062d969b31d63083b73e8b1b23a142338dd137314a"} Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.081631 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-log" containerID="cri-o://30b67842505800f34f385111cb897bc915649abfdb83a7a6bdd1486a113a2c89" gracePeriod=30 Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.081746 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-httpd" containerID="cri-o://220461d7957c578661bb83062d969b31d63083b73e8b1b23a142338dd137314a" gracePeriod=30 Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.184989 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.184966006 podStartE2EDuration="7.184966006s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:44.167339393 +0000 UTC m=+1155.043356560" watchObservedRunningTime="2025-12-12 04:52:44.184966006 +0000 UTC m=+1155.060983153" Dec 12 04:52:44 crc kubenswrapper[4796]: I1212 04:52:44.185926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.185920696 podStartE2EDuration="7.185920696s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:52:44.12551199 +0000 UTC m=+1155.001529137" watchObservedRunningTime="2025-12-12 04:52:44.185920696 +0000 UTC m=+1155.061937843" Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.099826 4796 generic.go:334] "Generic (PLEG): container finished" podID="38281847-cdff-430d-a6f3-d029c2032974" containerID="a9264352b691a9aa0c7721639f73745589a772184a1151678984750ee1707445" exitCode=0 Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.101126 4796 generic.go:334] "Generic (PLEG): container finished" podID="38281847-cdff-430d-a6f3-d029c2032974" containerID="b9a3324d60dda2e0f722bb53799fff8167db9b5713b65d17866fe19647f439f7" exitCode=143 Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.099985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerDied","Data":"a9264352b691a9aa0c7721639f73745589a772184a1151678984750ee1707445"} Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.101212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerDied","Data":"b9a3324d60dda2e0f722bb53799fff8167db9b5713b65d17866fe19647f439f7"} Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.105461 4796 generic.go:334] "Generic (PLEG): container finished" podID="792c112e-eac1-4ede-a03d-5871e4679e17" containerID="220461d7957c578661bb83062d969b31d63083b73e8b1b23a142338dd137314a" exitCode=0 Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.105491 4796 
generic.go:334] "Generic (PLEG): container finished" podID="792c112e-eac1-4ede-a03d-5871e4679e17" containerID="30b67842505800f34f385111cb897bc915649abfdb83a7a6bdd1486a113a2c89" exitCode=143 Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.105504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerDied","Data":"220461d7957c578661bb83062d969b31d63083b73e8b1b23a142338dd137314a"} Dec 12 04:52:45 crc kubenswrapper[4796]: I1212 04:52:45.105552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerDied","Data":"30b67842505800f34f385111cb897bc915649abfdb83a7a6bdd1486a113a2c89"} Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.120520 4796 generic.go:334] "Generic (PLEG): container finished" podID="4273160c-2433-4765-a0fb-70700a3378d9" containerID="f87341611826664cf74a9dff24dbf0d1dd36ecc0f6f0d2058621fbcf8ce02f0d" exitCode=0 Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.120592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkmf2" event={"ID":"4273160c-2433-4765-a0fb-70700a3378d9","Type":"ContainerDied","Data":"f87341611826664cf74a9dff24dbf0d1dd36ecc0f6f0d2058621fbcf8ce02f0d"} Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.415565 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.462728 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.464840 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.485994 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.504698 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.566431 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.650943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.658867 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cb55bccb4-z8p6q"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.660355 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.660905 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.661150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.661269 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqhc\" (UniqueName: \"kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.661349 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.661409 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.661453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.744375 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb55bccb4-z8p6q"] Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765406 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7913672c-384c-472c-89a8-0d546f345a28-logs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-secret-key\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765511 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-tls-certs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-scripts\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765570 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj92f\" (UniqueName: \"kubernetes.io/projected/7913672c-384c-472c-89a8-0d546f345a28-kube-api-access-zj92f\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765653 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqhc\" (UniqueName: \"kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765679 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765701 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-config-data\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765787 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-combined-ca-bundle\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.765829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.766517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.767123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.767454 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.773666 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.775009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.791999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.798983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwqhc\" (UniqueName: \"kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc\") pod \"horizon-67764d6b9b-h7fdk\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " 
pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj92f\" (UniqueName: \"kubernetes.io/projected/7913672c-384c-472c-89a8-0d546f345a28-kube-api-access-zj92f\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867734 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-config-data\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867780 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-combined-ca-bundle\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867856 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7913672c-384c-472c-89a8-0d546f345a28-logs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-secret-key\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867913 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-tls-certs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.867950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-scripts\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.868751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-scripts\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.869099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7913672c-384c-472c-89a8-0d546f345a28-logs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.869514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/7913672c-384c-472c-89a8-0d546f345a28-config-data\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.872799 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-secret-key\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.878075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-horizon-tls-certs\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.881122 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7913672c-384c-472c-89a8-0d546f345a28-combined-ca-bundle\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:46 crc kubenswrapper[4796]: I1212 04:52:46.899567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj92f\" (UniqueName: \"kubernetes.io/projected/7913672c-384c-472c-89a8-0d546f345a28-kube-api-access-zj92f\") pod \"horizon-6cb55bccb4-z8p6q\" (UID: \"7913672c-384c-472c-89a8-0d546f345a28\") " pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:47 crc kubenswrapper[4796]: I1212 04:52:47.017773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:52:47 crc kubenswrapper[4796]: I1212 04:52:47.093930 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:52:48 crc kubenswrapper[4796]: I1212 04:52:48.141456 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:52:48 crc kubenswrapper[4796]: I1212 04:52:48.218455 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:52:48 crc kubenswrapper[4796]: I1212 04:52:48.218872 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" containerID="cri-o://c50194b2a3c997e56a9df9912a8060d2ba5dd33d577e0e56f87de3b069b4e284" gracePeriod=10 Dec 12 04:52:48 crc kubenswrapper[4796]: I1212 04:52:48.787147 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 12 04:52:49 crc kubenswrapper[4796]: I1212 04:52:49.163789 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerID="c50194b2a3c997e56a9df9912a8060d2ba5dd33d577e0e56f87de3b069b4e284" exitCode=0 Dec 12 04:52:49 crc kubenswrapper[4796]: I1212 04:52:49.163828 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" event={"ID":"ff3d9df9-a1b6-4474-9614-4d7535353752","Type":"ContainerDied","Data":"c50194b2a3c997e56a9df9912a8060d2ba5dd33d577e0e56f87de3b069b4e284"} Dec 12 04:52:53 crc kubenswrapper[4796]: I1212 04:52:53.725030 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 12 04:52:56 crc kubenswrapper[4796]: E1212 04:52:56.151655 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 12 04:52:56 crc kubenswrapper[4796]: E1212 04:52:56.152351 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6567,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-s672f_openstack(b9082485-1887-4b6d-8e1f-371825f61dfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:52:56 crc kubenswrapper[4796]: E1212 04:52:56.153576 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-s672f" podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" Dec 12 04:52:56 crc kubenswrapper[4796]: E1212 04:52:56.242849 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-s672f" podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" Dec 12 04:52:57 crc kubenswrapper[4796]: E1212 04:52:57.997413 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 12 04:52:57 crc kubenswrapper[4796]: E1212 04:52:57.998580 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cds9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-ft5nd_openstack(32723a76-dbe0-493d-9a87-5c2f46912a71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:52:57 crc kubenswrapper[4796]: E1212 04:52:57.999755 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-ft5nd" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" Dec 12 04:52:58 crc kubenswrapper[4796]: E1212 04:52:58.256880 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-ft5nd" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" Dec 12 04:52:58 crc kubenswrapper[4796]: I1212 04:52:58.724938 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 12 04:52:58 crc kubenswrapper[4796]: I1212 04:52:58.725313 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:52:59 crc kubenswrapper[4796]: I1212 04:52:59.964589 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:52:59 crc kubenswrapper[4796]: I1212 04:52:59.977017 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.123576 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89ls6\" (UniqueName: \"kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.123662 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.123691 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww5mj\" (UniqueName: \"kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124332 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124371 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124444 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124510 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle\") pod \"4273160c-2433-4765-a0fb-70700a3378d9\" (UID: \"4273160c-2433-4765-a0fb-70700a3378d9\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.124594 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.129082 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs" (OuterVolumeSpecName: "logs") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.131722 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.138163 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts" (OuterVolumeSpecName: "scripts") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.144274 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6" (OuterVolumeSpecName: "kube-api-access-89ls6") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "kube-api-access-89ls6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.145821 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.146679 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts" (OuterVolumeSpecName: "scripts") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.168387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.168410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.169004 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.197915 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj" (OuterVolumeSpecName: "kube-api-access-ww5mj") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "kube-api-access-ww5mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.209547 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.216966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233234 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89ls6\" (UniqueName: \"kubernetes.io/projected/4273160c-2433-4765-a0fb-70700a3378d9-kube-api-access-89ls6\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233301 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233311 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233320 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww5mj\" (UniqueName: \"kubernetes.io/projected/38281847-cdff-430d-a6f3-d029c2032974-kube-api-access-ww5mj\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233328 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233336 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233355 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233383 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233393 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.233401 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38281847-cdff-430d-a6f3-d029c2032974-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.260964 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.290719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data" (OuterVolumeSpecName: "config-data") pod "4273160c-2433-4765-a0fb-70700a3378d9" (UID: "4273160c-2433-4765-a0fb-70700a3378d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.297795 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkmf2" event={"ID":"4273160c-2433-4765-a0fb-70700a3378d9","Type":"ContainerDied","Data":"4877dd59d5bac1b7e58188a836aa0d6182db32c91f220911f59de3962901a7d3"} Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.297875 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4877dd59d5bac1b7e58188a836aa0d6182db32c91f220911f59de3962901a7d3" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.298049 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fkmf2" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.313744 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.330118 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.330006 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38281847-cdff-430d-a6f3-d029c2032974","Type":"ContainerDied","Data":"50832d90bf506163461101b0ccfae955e384a955a2227ca8fae72e5d05360013"} Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.330218 4796 scope.go:117] "RemoveContainer" containerID="a9264352b691a9aa0c7721639f73745589a772184a1151678984750ee1707445" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.334063 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data" (OuterVolumeSpecName: "config-data") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.336069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") pod \"38281847-cdff-430d-a6f3-d029c2032974\" (UID: \"38281847-cdff-430d-a6f3-d029c2032974\") " Dec 12 04:53:00 crc kubenswrapper[4796]: W1212 04:53:00.336243 4796 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/38281847-cdff-430d-a6f3-d029c2032974/volumes/kubernetes.io~secret/config-data Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.336258 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data" (OuterVolumeSpecName: "config-data") pod "38281847-cdff-430d-a6f3-d029c2032974" (UID: "38281847-cdff-430d-a6f3-d029c2032974"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.337538 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.337559 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.337568 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4273160c-2433-4765-a0fb-70700a3378d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.337576 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38281847-cdff-430d-a6f3-d029c2032974-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.680857 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.693220 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.703442 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:53:00 crc kubenswrapper[4796]: E1212 04:53:00.703880 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4273160c-2433-4765-a0fb-70700a3378d9" containerName="keystone-bootstrap" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.703900 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4273160c-2433-4765-a0fb-70700a3378d9" containerName="keystone-bootstrap" Dec 12 04:53:00 crc kubenswrapper[4796]: E1212 04:53:00.703938 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-log" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.703946 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-log" Dec 12 04:53:00 crc kubenswrapper[4796]: E1212 04:53:00.703966 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-httpd" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.703974 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-httpd" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.704196 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-log" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.704236 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="38281847-cdff-430d-a6f3-d029c2032974" containerName="glance-httpd" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.704253 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4273160c-2433-4765-a0fb-70700a3378d9" containerName="keystone-bootstrap" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.705472 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.713906 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.714015 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.719736 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.854803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.854997 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855187 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c58v9\" (UniqueName: \"kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855366 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.855431 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.956963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957078 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c58v9\" (UniqueName: \"kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957165 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957305 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.957326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.958652 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.959063 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.959749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.964838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.965702 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.967053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.969891 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:00 crc kubenswrapper[4796]: I1212 04:53:00.984754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c58v9\" (UniqueName: \"kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.032670 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.113079 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fkmf2"] Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.122388 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-fkmf2"] Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.181958 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s2zgk"] Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.183058 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.185351 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7r2qv" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.187324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.189437 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.190262 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.194625 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s2zgk"] Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.197164 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266426 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266587 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.266717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp4t\" (UniqueName: 
\"kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.335369 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.368863 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.369291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.369329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.369350 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.369378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp4t\" (UniqueName: \"kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.369432 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.373902 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.374128 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.374493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.378255 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.389688 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.400014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcp4t\" (UniqueName: \"kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t\") pod \"keystone-bootstrap-s2zgk\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.431433 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38281847-cdff-430d-a6f3-d029c2032974" path="/var/lib/kubelet/pods/38281847-cdff-430d-a6f3-d029c2032974/volumes" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.432916 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4273160c-2433-4765-a0fb-70700a3378d9" path="/var/lib/kubelet/pods/4273160c-2433-4765-a0fb-70700a3378d9/volumes" Dec 12 04:53:01 crc kubenswrapper[4796]: I1212 04:53:01.501823 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:07 crc kubenswrapper[4796]: I1212 04:53:07.393097 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c0b2dfd-d78a-45ba-aac1-fab7457e322c" containerID="267dc094d0f17957dc3d7616911386fbd4df0d13a6a85376914267685b0644ee" exitCode=0 Dec 12 04:53:07 crc kubenswrapper[4796]: I1212 04:53:07.393189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wwzfw" event={"ID":"9c0b2dfd-d78a-45ba-aac1-fab7457e322c","Type":"ContainerDied","Data":"267dc094d0f17957dc3d7616911386fbd4df0d13a6a85376914267685b0644ee"} Dec 12 04:53:07 crc kubenswrapper[4796]: I1212 04:53:07.986639 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:53:07 crc kubenswrapper[4796]: I1212 04:53:07.987007 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:53:08 crc kubenswrapper[4796]: I1212 04:53:08.725189 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.585930 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739366 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739461 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcdl\" (UniqueName: \"kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739483 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739580 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739605 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.739756 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs\") pod \"792c112e-eac1-4ede-a03d-5871e4679e17\" (UID: \"792c112e-eac1-4ede-a03d-5871e4679e17\") " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.741341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.741540 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs" (OuterVolumeSpecName: "logs") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.744990 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts" (OuterVolumeSpecName: "scripts") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.746661 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.758480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl" (OuterVolumeSpecName: "kube-api-access-6mcdl") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "kube-api-access-6mcdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.778960 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.852499 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data" (OuterVolumeSpecName: "config-data") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853822 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853860 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcdl\" (UniqueName: \"kubernetes.io/projected/792c112e-eac1-4ede-a03d-5871e4679e17-kube-api-access-6mcdl\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853875 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853884 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853915 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853970 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.853984 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/792c112e-eac1-4ede-a03d-5871e4679e17-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.856251 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "792c112e-eac1-4ede-a03d-5871e4679e17" (UID: "792c112e-eac1-4ede-a03d-5871e4679e17"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.898812 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.956082 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/792c112e-eac1-4ede-a03d-5871e4679e17-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:09 crc kubenswrapper[4796]: I1212 04:53:09.956123 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.418357 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"792c112e-eac1-4ede-a03d-5871e4679e17","Type":"ContainerDied","Data":"f4b71ae8c5fd41cb989cade91f7d5ebb3e41573924e106a94873ea1f5ee723da"} Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.418633 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.453900 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.463404 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.474539 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:53:10 crc kubenswrapper[4796]: E1212 04:53:10.475179 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-httpd" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.475201 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-httpd" Dec 12 04:53:10 crc kubenswrapper[4796]: E1212 04:53:10.475238 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-log" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.475248 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-log" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.475619 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-log" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.475733 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" containerName="glance-httpd" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.477046 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.482606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.495961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.501781 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.565780 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.565824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.565869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.565918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.565937 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.566053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.566095 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4wn\" (UniqueName: \"kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.566155 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667728 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.668229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.667780 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4wn\" (UniqueName: \"kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.668326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.668518 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.668540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.671699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.672038 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.677509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.677677 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.683481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4wn\" (UniqueName: \"kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.707623 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " pod="openstack/glance-default-external-api-0" Dec 12 04:53:10 crc kubenswrapper[4796]: I1212 04:53:10.791535 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:53:11 crc kubenswrapper[4796]: E1212 04:53:11.195326 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 12 04:53:11 crc kubenswrapper[4796]: E1212 04:53:11.195520 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9ch674h54bh5bh594h8dh89h58dh5f6h94h586h645h5b5h654hc9h7dh588h5dfh579hbdh578h55fhd8h649h588h549h5f6h59dh647h5f4h85h54q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ktwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c16aa3d8-e979-4370-bda3-22d68070a7ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.294927 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.303442 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381670 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb\") pod \"ff3d9df9-a1b6-4474-9614-4d7535353752\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381728 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc\") pod \"ff3d9df9-a1b6-4474-9614-4d7535353752\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381767 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config\") pod \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381794 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hbf\" (UniqueName: \"kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf\") pod \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381833 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config\") pod \"ff3d9df9-a1b6-4474-9614-4d7535353752\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle\") pod \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\" (UID: \"9c0b2dfd-d78a-45ba-aac1-fab7457e322c\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381907 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x6sf\" (UniqueName: \"kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf\") pod \"ff3d9df9-a1b6-4474-9614-4d7535353752\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.381936 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb\") pod \"ff3d9df9-a1b6-4474-9614-4d7535353752\" (UID: \"ff3d9df9-a1b6-4474-9614-4d7535353752\") " Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.391462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf" (OuterVolumeSpecName: "kube-api-access-89hbf") pod "9c0b2dfd-d78a-45ba-aac1-fab7457e322c" (UID: "9c0b2dfd-d78a-45ba-aac1-fab7457e322c"). InnerVolumeSpecName "kube-api-access-89hbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.413846 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf" (OuterVolumeSpecName: "kube-api-access-6x6sf") pod "ff3d9df9-a1b6-4474-9614-4d7535353752" (UID: "ff3d9df9-a1b6-4474-9614-4d7535353752"). InnerVolumeSpecName "kube-api-access-6x6sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.426898 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792c112e-eac1-4ede-a03d-5871e4679e17" path="/var/lib/kubelet/pods/792c112e-eac1-4ede-a03d-5871e4679e17/volumes" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.432590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config" (OuterVolumeSpecName: "config") pod "9c0b2dfd-d78a-45ba-aac1-fab7457e322c" (UID: "9c0b2dfd-d78a-45ba-aac1-fab7457e322c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.461229 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff3d9df9-a1b6-4474-9614-4d7535353752" (UID: "ff3d9df9-a1b6-4474-9614-4d7535353752"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.462180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" event={"ID":"ff3d9df9-a1b6-4474-9614-4d7535353752","Type":"ContainerDied","Data":"08020f41c9b3d74b6d6a4dd8b33f4c26457687397fe6180ec6c97196919bfccb"} Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.462311 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.469612 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wwzfw" event={"ID":"9c0b2dfd-d78a-45ba-aac1-fab7457e322c","Type":"ContainerDied","Data":"6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637"} Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.469656 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f063ec95f0f35d9654b976064cdefd610c626baecca0906221330e4032e7637" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.469718 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wwzfw" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.477471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c0b2dfd-d78a-45ba-aac1-fab7457e322c" (UID: "9c0b2dfd-d78a-45ba-aac1-fab7457e322c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.492979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff3d9df9-a1b6-4474-9614-4d7535353752" (UID: "ff3d9df9-a1b6-4474-9614-4d7535353752"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494066 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494087 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494097 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494106 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hbf\" (UniqueName: \"kubernetes.io/projected/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-kube-api-access-89hbf\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494116 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2dfd-d78a-45ba-aac1-fab7457e322c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.494126 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x6sf\" (UniqueName: \"kubernetes.io/projected/ff3d9df9-a1b6-4474-9614-4d7535353752-kube-api-access-6x6sf\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.518817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff3d9df9-a1b6-4474-9614-4d7535353752" (UID: "ff3d9df9-a1b6-4474-9614-4d7535353752"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.527464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config" (OuterVolumeSpecName: "config") pod "ff3d9df9-a1b6-4474-9614-4d7535353752" (UID: "ff3d9df9-a1b6-4474-9614-4d7535353752"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.597170 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.597215 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3d9df9-a1b6-4474-9614-4d7535353752-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.812874 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:53:11 crc kubenswrapper[4796]: I1212 04:53:11.837198 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hcbxs"] Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.602897 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:12 crc kubenswrapper[4796]: E1212 04:53:12.604204 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0b2dfd-d78a-45ba-aac1-fab7457e322c" containerName="neutron-db-sync" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.604223 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0b2dfd-d78a-45ba-aac1-fab7457e322c" containerName="neutron-db-sync" Dec 12 04:53:12 crc kubenswrapper[4796]: E1212 04:53:12.604236 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.604243 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" Dec 12 04:53:12 crc kubenswrapper[4796]: E1212 04:53:12.604384 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="init" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.604394 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="init" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.604566 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.604581 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0b2dfd-d78a-45ba-aac1-fab7457e322c" containerName="neutron-db-sync" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.608308 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.643348 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.728811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.728894 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.728919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.728988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.729013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtpb\" (UniqueName: \"kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.729033 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.765236 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.767266 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.773774 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.774098 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.774136 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.774626 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vg8z6" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.802129 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.833611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vtpb\" (UniqueName: \"kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.833676 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.834141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.835141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.835157 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.835242 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.835320 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: 
\"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.835537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.838161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.840205 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.842041 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.882141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vtpb\" (UniqueName: \"kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb\") pod \"dnsmasq-dns-6b7b667979-s242x\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.937259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.937377 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqngl\" (UniqueName: \"kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.937407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.937427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " 
pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.937763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:12 crc kubenswrapper[4796]: I1212 04:53:12.955616 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.039944 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.039996 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.040052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqngl\" (UniqueName: \"kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.040075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.040098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.044794 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.045412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.061233 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: 
\"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.062181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.092049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqngl\" (UniqueName: \"kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl\") pod \"neutron-68c4f9dc76-n9c9p\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.116707 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.423465 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" path="/var/lib/kubelet/pods/ff3d9df9-a1b6-4474-9614-4d7535353752/volumes" Dec 12 04:53:13 crc kubenswrapper[4796]: E1212 04:53:13.626026 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 12 04:53:13 crc kubenswrapper[4796]: E1212 04:53:13.626802 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4rpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lr2pq_openstack(f2af8481-6c64-4dc2-8028-b5a548dca4ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 04:53:13 crc kubenswrapper[4796]: E1212 04:53:13.628072 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lr2pq" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.635055 4796 scope.go:117] "RemoveContainer" containerID="b9a3324d60dda2e0f722bb53799fff8167db9b5713b65d17866fe19647f439f7" Dec 12 04:53:13 crc kubenswrapper[4796]: I1212 04:53:13.725706 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hcbxs" podUID="ff3d9df9-a1b6-4474-9614-4d7535353752" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 12 04:53:14 crc kubenswrapper[4796]: I1212 04:53:14.438268 4796 scope.go:117] "RemoveContainer" containerID="220461d7957c578661bb83062d969b31d63083b73e8b1b23a142338dd137314a" Dec 12 04:53:14 crc kubenswrapper[4796]: E1212 04:53:14.541433 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-lr2pq" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" Dec 12 04:53:14 crc kubenswrapper[4796]: I1212 04:53:14.782369 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb55bccb4-z8p6q"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.115414 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.231574 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s2zgk"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.451977 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c45989d6c-2r8mn"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.462888 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.470849 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.470959 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.472506 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c45989d6c-2r8mn"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.513099 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.557895 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-httpd-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8cc\" (UniqueName: \"kubernetes.io/projected/c2090789-6394-4377-8d8c-4c37cd7bd857-kube-api-access-zr8cc\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579265 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-public-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579292 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-ovndb-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 
04:53:15.579311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-internal-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-combined-ca-bundle\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.579442 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.580643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.634595 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.680741 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-combined-ca-bundle\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681124 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-httpd-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681202 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8cc\" (UniqueName: \"kubernetes.io/projected/c2090789-6394-4377-8d8c-4c37cd7bd857-kube-api-access-zr8cc\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-public-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-ovndb-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.681468 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-internal-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.688004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-httpd-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.688444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-config\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.697596 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-ovndb-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.697636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-public-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.697768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-combined-ca-bundle\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.700787 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2090789-6394-4377-8d8c-4c37cd7bd857-internal-tls-certs\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.711094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8cc\" (UniqueName: \"kubernetes.io/projected/c2090789-6394-4377-8d8c-4c37cd7bd857-kube-api-access-zr8cc\") pod \"neutron-6c45989d6c-2r8mn\" (UID: \"c2090789-6394-4377-8d8c-4c37cd7bd857\") " pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: I1212 04:53:15.797529 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:15 crc kubenswrapper[4796]: W1212 04:53:15.837626 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c750273_c3b9_46b0_b884_422d779e73e3.slice/crio-b0a02947986385cc6fa9a7507fe8e02a9d9d3897423881b4236f1cec7a4d13c5 WatchSource:0}: Error finding container b0a02947986385cc6fa9a7507fe8e02a9d9d3897423881b4236f1cec7a4d13c5: Status 404 returned error can't find the container with id b0a02947986385cc6fa9a7507fe8e02a9d9d3897423881b4236f1cec7a4d13c5 Dec 12 04:53:15 crc kubenswrapper[4796]: W1212 04:53:15.868162 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9dd4b9b_2536_495d_bc5c_c3260fa7289a.slice/crio-887db9a6da6f04a7de5440932eea4c5eb76f1b16cbddb9c5a41123cb13804e74 WatchSource:0}: Error finding container 887db9a6da6f04a7de5440932eea4c5eb76f1b16cbddb9c5a41123cb13804e74: Status 404 returned error can't find the container with id 887db9a6da6f04a7de5440932eea4c5eb76f1b16cbddb9c5a41123cb13804e74 Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.050424 4796 scope.go:117] "RemoveContainer" containerID="30b67842505800f34f385111cb897bc915649abfdb83a7a6bdd1486a113a2c89" Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.314551 4796 scope.go:117] "RemoveContainer" containerID="c50194b2a3c997e56a9df9912a8060d2ba5dd33d577e0e56f87de3b069b4e284" Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.601873 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerStarted","Data":"b8956b6c61d94b1e984282324d1c6ccf2527a2b3883936db18bd0a9bc4860cc4"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.621342 4796 scope.go:117] "RemoveContainer" containerID="2665b2177a2e60a86e6f4a6ec842868f413cdf8b2672ef27088867571605f6bf" Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.624971 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-s242x" event={"ID":"67ffc83e-cc88-4963-9302-fa6c816ce4c4","Type":"ContainerStarted","Data":"cd2ca5edb59133d7c3ecbf8741e2aad2885e0d36b0a2b0b5c8cfe2b62866cacd"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.634540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"fdcd5a151fbdb3cd4eaf05198b0895e0757b8012efa5a37a3313e4c25011c9d2"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.647325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerStarted","Data":"7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.668614 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerStarted","Data":"3a3c085190d5221a21a93e0f143f2ff0f9108e1c118d121a7bdd659d146ad8cd"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.686987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2zgk" 
event={"ID":"11a294f9-a8f1-47e6-a551-8a47f1751c39","Type":"ContainerStarted","Data":"679111d70a5c9f0f47f7813fafbcb079ffbb0c7a329adac2be3fffb69043dce1"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.702427 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c45989d6c-2r8mn"] Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.746546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"887db9a6da6f04a7de5440932eea4c5eb76f1b16cbddb9c5a41123cb13804e74"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.756051 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerStarted","Data":"fb9efb26da5936ab195fd51951ea07162000999ef7bcec6abb416a860a74b1fa"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.756365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerStarted","Data":"b0a02947986385cc6fa9a7507fe8e02a9d9d3897423881b4236f1cec7a4d13c5"} Dec 12 04:53:16 crc kubenswrapper[4796]: I1212 04:53:16.774790 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerStarted","Data":"a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.787876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2zgk" event={"ID":"11a294f9-a8f1-47e6-a551-8a47f1751c39","Type":"ContainerStarted","Data":"8d21a072b5e63f30311044a42ffbfe3d326f135ba45e5866e09f3f4923daeb2e"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.790752 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"c2e1ad3cacace7f7a4135db5631dba25cffc7f212d2b2651269d02773d710dcb"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.792252 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerStarted","Data":"629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.794202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c45989d6c-2r8mn" event={"ID":"c2090789-6394-4377-8d8c-4c37cd7bd857","Type":"ContainerStarted","Data":"f18527826b09eb4703a3e45cf416d0c121780d8d85a481bdc0f8ac1bc3b5ed8b"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.798019 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerID="73f32a9f2b54e5cb1cfba4589990c682470e1743689a9aeda2a1d7b8e815874c" exitCode=0 Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.798075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-s242x" event={"ID":"67ffc83e-cc88-4963-9302-fa6c816ce4c4","Type":"ContainerDied","Data":"73f32a9f2b54e5cb1cfba4589990c682470e1743689a9aeda2a1d7b8e815874c"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.801071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" 
event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"5c5603cfd2489c8a3820907f40d29a1ed5558f7227042d3e3483d44a0f023bf4"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.810601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s672f" event={"ID":"b9082485-1887-4b6d-8e1f-371825f61dfc","Type":"ContainerStarted","Data":"d7ba977ca0a7c2b1a64bfdbd032965adad4d31f4380cfd90964d7ae6121e1a8e"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.820916 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerStarted","Data":"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.827879 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ft5nd" event={"ID":"32723a76-dbe0-493d-9a87-5c2f46912a71","Type":"ContainerStarted","Data":"c0ac27ddc674d4c589112df98817a51c81d670252bd4e28afe21a761f198c507"} Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.830074 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s2zgk" podStartSLOduration=16.830054198 podStartE2EDuration="16.830054198s" podCreationTimestamp="2025-12-12 04:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:17.807131753 +0000 UTC m=+1188.683148900" watchObservedRunningTime="2025-12-12 04:53:17.830054198 +0000 UTC m=+1188.706071345" Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.874831 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s672f" podStartSLOduration=3.9666724159999998 podStartE2EDuration="40.874812175s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="2025-12-12 04:52:39.028093933 +0000 UTC m=+1149.904111080" lastFinishedPulling="2025-12-12 04:53:15.936233692 +0000 UTC m=+1186.812250839" observedRunningTime="2025-12-12 04:53:17.855577875 +0000 UTC m=+1188.731595012" watchObservedRunningTime="2025-12-12 04:53:17.874812175 +0000 UTC m=+1188.750829312" Dec 12 04:53:17 crc kubenswrapper[4796]: I1212 04:53:17.888624 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ft5nd" podStartSLOduration=3.952416636 podStartE2EDuration="40.888608095s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="2025-12-12 04:52:38.999228107 +0000 UTC m=+1149.875245254" lastFinishedPulling="2025-12-12 04:53:15.935419566 +0000 UTC m=+1186.811436713" observedRunningTime="2025-12-12 04:53:17.887432749 +0000 UTC m=+1188.763449896" watchObservedRunningTime="2025-12-12 04:53:17.888608095 +0000 UTC m=+1188.764625242" Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.849878 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d574cc9cf-zb7lp" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon-log" containerID="cri-o://629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.850328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerStarted","Data":"9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226"} 
Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.850716 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d574cc9cf-zb7lp" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon" containerID="cri-o://9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.855903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c45989d6c-2r8mn" event={"ID":"c2090789-6394-4377-8d8c-4c37cd7bd857","Type":"ContainerStarted","Data":"a7d6f20ddddf34cdce3d45ea617980b47dd4e4a3d8858454936fe12fa251b6ea"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.868690 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerStarted","Data":"ef32e79269831a71285933d9c85cdf8359cd004801eabddbb14c41797e90f0be"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.874499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerStarted","Data":"c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.874650 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-686cc94ff9-lggnr" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon-log" containerID="cri-o://7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.874888 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-686cc94ff9-lggnr" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon" containerID="cri-o://c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.899402 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d574cc9cf-zb7lp" podStartSLOduration=7.097263499 podStartE2EDuration="41.899384202s" podCreationTimestamp="2025-12-12 04:52:37 +0000 UTC" firstStartedPulling="2025-12-12 04:52:39.004146762 +0000 UTC m=+1149.880163909" lastFinishedPulling="2025-12-12 04:53:13.806267465 +0000 UTC m=+1184.682284612" observedRunningTime="2025-12-12 04:53:18.890877806 +0000 UTC m=+1189.766894973" watchObservedRunningTime="2025-12-12 04:53:18.899384202 +0000 UTC m=+1189.775401349" Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.899468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerStarted","Data":"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.907684 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.916443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerStarted","Data":"5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17"} 
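The "Killing container with a grace period" entries here and below show the kubelet stopping several horizon containers (gracePeriod=30) and, after the SyncLoop DELETE at 04:53:28 further down, the old dnsmasq-dns-56df8fb6b7-5rwzk container (gracePeriod=10). That grace period is passed to the container runtime as the stop timeout. A rough sketch of the equivalent CRI StopContainer call against CRI-O follows, assuming the usual /var/run/crio/crio.sock runtime endpoint (the socket path is not stated in this log); the container ID and timeout are taken from the horizon-d574cc9cf-zb7lp record above.

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O runtime endpoint on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Second)
	defer cancel()

	// Mirrors the kubelet's "Killing container with a grace period" step:
	// the ID is the cri-o:// container from the log, the timeout is the
	// grace period (30s for the horizon containers).
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226",
		Timeout:     30,
	})
	if err != nil {
		panic(err)
	}
}
```

Once the runtime has stopped the container, the matching ContainerDied PLEG event appears, as it does for the old dnsmasq container at 04:53:29.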
Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.918758 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.925736 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f5d564649-v799z" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon-log" containerID="cri-o://a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.925895 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerStarted","Data":"351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d"} Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.925935 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f5d564649-v799z" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon" containerID="cri-o://351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d" gracePeriod=30 Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.945454 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-686cc94ff9-lggnr" podStartSLOduration=8.034098991 podStartE2EDuration="42.945433699s" podCreationTimestamp="2025-12-12 04:52:36 +0000 UTC" firstStartedPulling="2025-12-12 04:52:38.735689639 +0000 UTC m=+1149.611706786" lastFinishedPulling="2025-12-12 04:53:13.647024347 +0000 UTC m=+1184.523041494" observedRunningTime="2025-12-12 04:53:18.915613978 +0000 UTC m=+1189.791631125" watchObservedRunningTime="2025-12-12 04:53:18.945433699 +0000 UTC m=+1189.821450846" Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.946673 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67764d6b9b-h7fdk" podStartSLOduration=32.946665607 podStartE2EDuration="32.946665607s" podCreationTimestamp="2025-12-12 04:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:18.943850449 +0000 UTC m=+1189.819867586" watchObservedRunningTime="2025-12-12 04:53:18.946665607 +0000 UTC m=+1189.822682754" Dec 12 04:53:18 crc kubenswrapper[4796]: I1212 04:53:18.983414 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f5d564649-v799z" podStartSLOduration=9.47212424 podStartE2EDuration="38.983393433s" podCreationTimestamp="2025-12-12 04:52:40 +0000 UTC" firstStartedPulling="2025-12-12 04:52:41.680002064 +0000 UTC m=+1152.556019211" lastFinishedPulling="2025-12-12 04:53:11.191271247 +0000 UTC m=+1182.067288404" observedRunningTime="2025-12-12 04:53:18.972207044 +0000 UTC m=+1189.848224201" watchObservedRunningTime="2025-12-12 04:53:18.983393433 +0000 UTC m=+1189.859410580" Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.461765 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68c4f9dc76-n9c9p" podStartSLOduration=7.461748197 podStartE2EDuration="7.461748197s" podCreationTimestamp="2025-12-12 04:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:19.006327078 +0000 UTC m=+1189.882344235" watchObservedRunningTime="2025-12-12 04:53:19.461748197 +0000 UTC 
m=+1190.337765344" Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.939610 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c45989d6c-2r8mn" event={"ID":"c2090789-6394-4377-8d8c-4c37cd7bd857","Type":"ContainerStarted","Data":"027c32dcbdf1e6b892b99eff28f907180f31cee0e41bb0a27907f465ab609aed"} Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.939964 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.944960 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerStarted","Data":"741fe1ca77a568f4afe310310736db014c37377d494eed0ccb414e01fe71b5f8"} Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.947628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928"} Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.949999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-s242x" event={"ID":"67ffc83e-cc88-4963-9302-fa6c816ce4c4","Type":"ContainerStarted","Data":"1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33"} Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.950532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.952626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerStarted","Data":"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380"} Dec 12 04:53:19 crc kubenswrapper[4796]: I1212 04:53:19.966340 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c45989d6c-2r8mn" podStartSLOduration=4.96632328 podStartE2EDuration="4.96632328s" podCreationTimestamp="2025-12-12 04:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:19.965841685 +0000 UTC m=+1190.841858852" watchObservedRunningTime="2025-12-12 04:53:19.96632328 +0000 UTC m=+1190.842340427" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.053302 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.053265763 podStartE2EDuration="20.053265763s" podCreationTimestamp="2025-12-12 04:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:20.016248448 +0000 UTC m=+1190.892265615" watchObservedRunningTime="2025-12-12 04:53:20.053265763 +0000 UTC m=+1190.929282910" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.079019 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-s242x" podStartSLOduration=8.078997126 podStartE2EDuration="8.078997126s" podCreationTimestamp="2025-12-12 04:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:20.045521432 +0000 UTC 
m=+1190.921538579" watchObservedRunningTime="2025-12-12 04:53:20.078997126 +0000 UTC m=+1190.955014273" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.087709 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cb55bccb4-z8p6q" podStartSLOduration=34.087692527 podStartE2EDuration="34.087692527s" podCreationTimestamp="2025-12-12 04:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:20.075162106 +0000 UTC m=+1190.951179253" watchObservedRunningTime="2025-12-12 04:53:20.087692527 +0000 UTC m=+1190.963709674" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.125969 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.125945601 podStartE2EDuration="10.125945601s" podCreationTimestamp="2025-12-12 04:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:20.109118806 +0000 UTC m=+1190.985135953" watchObservedRunningTime="2025-12-12 04:53:20.125945601 +0000 UTC m=+1191.001962748" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.792402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.792774 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.862881 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.890216 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.963063 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f5d564649-v799z" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.982092 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:53:20 crc kubenswrapper[4796]: I1212 04:53:20.983128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:53:21 crc kubenswrapper[4796]: I1212 04:53:21.336752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:21 crc kubenswrapper[4796]: I1212 04:53:21.336790 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:21 crc kubenswrapper[4796]: I1212 04:53:21.392741 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:21 crc kubenswrapper[4796]: I1212 04:53:21.436230 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:22 crc kubenswrapper[4796]: I1212 04:53:22.007128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:22 crc kubenswrapper[4796]: I1212 04:53:22.008749 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Dec 12 04:53:24 crc kubenswrapper[4796]: I1212 04:53:24.032356 4796 generic.go:334] "Generic (PLEG): container finished" podID="b9082485-1887-4b6d-8e1f-371825f61dfc" containerID="d7ba977ca0a7c2b1a64bfdbd032965adad4d31f4380cfd90964d7ae6121e1a8e" exitCode=0 Dec 12 04:53:24 crc kubenswrapper[4796]: I1212 04:53:24.032590 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s672f" event={"ID":"b9082485-1887-4b6d-8e1f-371825f61dfc","Type":"ContainerDied","Data":"d7ba977ca0a7c2b1a64bfdbd032965adad4d31f4380cfd90964d7ae6121e1a8e"} Dec 12 04:53:24 crc kubenswrapper[4796]: I1212 04:53:24.038370 4796 generic.go:334] "Generic (PLEG): container finished" podID="32723a76-dbe0-493d-9a87-5c2f46912a71" containerID="c0ac27ddc674d4c589112df98817a51c81d670252bd4e28afe21a761f198c507" exitCode=0 Dec 12 04:53:24 crc kubenswrapper[4796]: I1212 04:53:24.038439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ft5nd" event={"ID":"32723a76-dbe0-493d-9a87-5c2f46912a71","Type":"ContainerDied","Data":"c0ac27ddc674d4c589112df98817a51c81d670252bd4e28afe21a761f198c507"} Dec 12 04:53:24 crc kubenswrapper[4796]: I1212 04:53:24.038475 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:53:25 crc kubenswrapper[4796]: I1212 04:53:25.052271 4796 generic.go:334] "Generic (PLEG): container finished" podID="11a294f9-a8f1-47e6-a551-8a47f1751c39" containerID="8d21a072b5e63f30311044a42ffbfe3d326f135ba45e5866e09f3f4923daeb2e" exitCode=0 Dec 12 04:53:25 crc kubenswrapper[4796]: I1212 04:53:25.052387 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2zgk" event={"ID":"11a294f9-a8f1-47e6-a551-8a47f1751c39","Type":"ContainerDied","Data":"8d21a072b5e63f30311044a42ffbfe3d326f135ba45e5866e09f3f4923daeb2e"} Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.020038 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.021299 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.094672 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.095336 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.328270 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.552927 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:53:27 crc kubenswrapper[4796]: I1212 04:53:27.959416 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:28 crc kubenswrapper[4796]: I1212 04:53:28.028059 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:53:28 crc kubenswrapper[4796]: I1212 04:53:28.028333 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="dnsmasq-dns" 
containerID="cri-o://5262bb061517de6e98754b1e0b8d11c251dde9d14e35ed5df69c798b7b2a976e" gracePeriod=10 Dec 12 04:53:28 crc kubenswrapper[4796]: I1212 04:53:28.160041 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:28 crc kubenswrapper[4796]: I1212 04:53:28.283993 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 04:53:29 crc kubenswrapper[4796]: I1212 04:53:29.128614 4796 generic.go:334] "Generic (PLEG): container finished" podID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerID="5262bb061517de6e98754b1e0b8d11c251dde9d14e35ed5df69c798b7b2a976e" exitCode=0 Dec 12 04:53:29 crc kubenswrapper[4796]: I1212 04:53:29.128660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" event={"ID":"beb84d61-a2ac-49bd-9a28-9ffe4095afc8","Type":"ContainerDied","Data":"5262bb061517de6e98754b1e0b8d11c251dde9d14e35ed5df69c798b7b2a976e"} Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.368617 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s672f" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.400401 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ft5nd" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.408076 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.414615 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6567\" (UniqueName: \"kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567\") pod \"b9082485-1887-4b6d-8e1f-371825f61dfc\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.414690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle\") pod \"b9082485-1887-4b6d-8e1f-371825f61dfc\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.414782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data\") pod \"b9082485-1887-4b6d-8e1f-371825f61dfc\" (UID: \"b9082485-1887-4b6d-8e1f-371825f61dfc\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.441384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b9082485-1887-4b6d-8e1f-371825f61dfc" (UID: "b9082485-1887-4b6d-8e1f-371825f61dfc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.448252 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567" (OuterVolumeSpecName: "kube-api-access-z6567") pod "b9082485-1887-4b6d-8e1f-371825f61dfc" (UID: "b9082485-1887-4b6d-8e1f-371825f61dfc"). InnerVolumeSpecName "kube-api-access-z6567". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.501207 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9082485-1887-4b6d-8e1f-371825f61dfc" (UID: "b9082485-1887-4b6d-8e1f-371825f61dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.521432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcp4t\" (UniqueName: \"kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524510 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cds9f\" (UniqueName: \"kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524637 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.524934 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.525010 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.525103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts\") pod \"11a294f9-a8f1-47e6-a551-8a47f1751c39\" (UID: \"11a294f9-a8f1-47e6-a551-8a47f1751c39\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.525182 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.526038 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.526159 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6567\" (UniqueName: \"kubernetes.io/projected/b9082485-1887-4b6d-8e1f-371825f61dfc-kube-api-access-z6567\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.526238 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9082485-1887-4b6d-8e1f-371825f61dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.564203 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.587230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t" (OuterVolumeSpecName: "kube-api-access-kcp4t") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "kube-api-access-kcp4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.587473 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.606506 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs" (OuterVolumeSpecName: "logs") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.622421 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f" (OuterVolumeSpecName: "kube-api-access-cds9f") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "kube-api-access-cds9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.622863 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts" (OuterVolumeSpecName: "scripts") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628712 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628740 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcp4t\" (UniqueName: \"kubernetes.io/projected/11a294f9-a8f1-47e6-a551-8a47f1751c39-kube-api-access-kcp4t\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628752 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cds9f\" (UniqueName: \"kubernetes.io/projected/32723a76-dbe0-493d-9a87-5c2f46912a71-kube-api-access-cds9f\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628763 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628773 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32723a76-dbe0-493d-9a87-5c2f46912a71-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.628782 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.633606 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts" (OuterVolumeSpecName: "scripts") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.649633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data" (OuterVolumeSpecName: "config-data") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.652964 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11a294f9-a8f1-47e6-a551-8a47f1751c39" (UID: "11a294f9-a8f1-47e6-a551-8a47f1751c39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.659143 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.679813 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730389 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data" (OuterVolumeSpecName: "config-data") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730515 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730563 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730629 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") pod \"32723a76-dbe0-493d-9a87-5c2f46912a71\" (UID: \"32723a76-dbe0-493d-9a87-5c2f46912a71\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730660 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730696 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dzhs\" (UniqueName: 
\"kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.730730 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb\") pod \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\" (UID: \"beb84d61-a2ac-49bd-9a28-9ffe4095afc8\") " Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.731859 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.731877 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.731888 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.731915 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a294f9-a8f1-47e6-a551-8a47f1751c39-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: W1212 04:53:31.745492 4796 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/32723a76-dbe0-493d-9a87-5c2f46912a71/volumes/kubernetes.io~secret/config-data Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.745514 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data" (OuterVolumeSpecName: "config-data") pod "32723a76-dbe0-493d-9a87-5c2f46912a71" (UID: "32723a76-dbe0-493d-9a87-5c2f46912a71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.747869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs" (OuterVolumeSpecName: "kube-api-access-8dzhs") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "kube-api-access-8dzhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.833423 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32723a76-dbe0-493d-9a87-5c2f46912a71-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.833451 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dzhs\" (UniqueName: \"kubernetes.io/projected/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-kube-api-access-8dzhs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.881337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.926217 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.934741 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.934776 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.963410 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.964333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config" (OuterVolumeSpecName: "config") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:31 crc kubenswrapper[4796]: I1212 04:53:31.976356 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "beb84d61-a2ac-49bd-9a28-9ffe4095afc8" (UID: "beb84d61-a2ac-49bd-9a28-9ffe4095afc8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.019509 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.036114 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.036150 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.036166 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/beb84d61-a2ac-49bd-9a28-9ffe4095afc8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.251349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerStarted","Data":"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe"} Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.254551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2zgk" event={"ID":"11a294f9-a8f1-47e6-a551-8a47f1751c39","Type":"ContainerDied","Data":"679111d70a5c9f0f47f7813fafbcb079ffbb0c7a329adac2be3fffb69043dce1"} Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.254722 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679111d70a5c9f0f47f7813fafbcb079ffbb0c7a329adac2be3fffb69043dce1" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.254860 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2zgk" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.279010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" event={"ID":"beb84d61-a2ac-49bd-9a28-9ffe4095afc8","Type":"ContainerDied","Data":"4798a5d18cc16b2b4e68de31f40d03a4b0531d03e64412671d7098be41f3d76e"} Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.279251 4796 scope.go:117] "RemoveContainer" containerID="5262bb061517de6e98754b1e0b8d11c251dde9d14e35ed5df69c798b7b2a976e" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.279548 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5rwzk" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.289459 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ft5nd" event={"ID":"32723a76-dbe0-493d-9a87-5c2f46912a71","Type":"ContainerDied","Data":"4bcf04d5d27a5219d37a00b8c66a7cc83ccd81c84882393378737c794bf59960"} Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.289494 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bcf04d5d27a5219d37a00b8c66a7cc83ccd81c84882393378737c794bf59960" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.289494 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ft5nd" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.337693 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s672f" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.344315 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.344451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s672f" event={"ID":"b9082485-1887-4b6d-8e1f-371825f61dfc","Type":"ContainerDied","Data":"4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9"} Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.344528 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.391566 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5rwzk"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.408505 4796 scope.go:117] "RemoveContainer" containerID="64f11ff8b3692472b577a6d5efcc3933c68f0b50495436a802445c0734039c52" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646358 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-56fbf7b8ff-h4cs5"] Dec 12 04:53:32 crc kubenswrapper[4796]: E1212 04:53:32.646789 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="init" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646802 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="init" Dec 12 04:53:32 crc kubenswrapper[4796]: E1212 04:53:32.646817 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="dnsmasq-dns" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646822 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="dnsmasq-dns" Dec 12 04:53:32 crc kubenswrapper[4796]: E1212 04:53:32.646839 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" containerName="placement-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646846 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" containerName="placement-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: E1212 04:53:32.646858 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a294f9-a8f1-47e6-a551-8a47f1751c39" containerName="keystone-bootstrap" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646864 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a294f9-a8f1-47e6-a551-8a47f1751c39" containerName="keystone-bootstrap" Dec 12 04:53:32 crc kubenswrapper[4796]: E1212 04:53:32.646876 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" containerName="barbican-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.646882 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" containerName="barbican-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.647054 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" containerName="barbican-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.647069 4796 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="11a294f9-a8f1-47e6-a551-8a47f1751c39" containerName="keystone-bootstrap" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.647078 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" containerName="placement-db-sync" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.647091 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" containerName="dnsmasq-dns" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.648624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.664731 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.664990 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v8m2j" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.677092 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8586b565b6-wdsdw"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.679029 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.686744 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.687037 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.712349 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56fbf7b8ff-h4cs5"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.734512 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8586b565b6-wdsdw"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.780192 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f8ffdd64b-7gmkf"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.788493 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.790862 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92265cff-6059-4736-a5cb-8935972c0bb8-logs\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.790900 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data-custom\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.790923 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-combined-ca-bundle\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.790948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.790984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cft7\" (UniqueName: \"kubernetes.io/projected/92265cff-6059-4736-a5cb-8935972c0bb8-kube-api-access-9cft7\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.791025 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.791062 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-combined-ca-bundle\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.791078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data-custom\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.791096 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pkcz\" (UniqueName: \"kubernetes.io/projected/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-kube-api-access-5pkcz\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.791119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-logs\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.802479 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dxhm5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.802662 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.802843 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.802984 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.803176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.859961 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f8ffdd64b-7gmkf"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.880337 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-86bc7ff485-lzxvk"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.881600 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.889397 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.891488 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.891680 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7r2qv" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.891845 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.891981 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.894806 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.895889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.895946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-combined-ca-bundle\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.895968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/724e3890-930d-4492-8599-460add96a852-logs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.895985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data-custom\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-internal-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pkcz\" (UniqueName: \"kubernetes.io/projected/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-kube-api-access-5pkcz\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 
04:53:32.896066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-combined-ca-bundle\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-logs\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-scripts\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-public-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92265cff-6059-4736-a5cb-8935972c0bb8-logs\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896169 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79xv\" (UniqueName: \"kubernetes.io/projected/724e3890-930d-4492-8599-460add96a852-kube-api-access-p79xv\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data-custom\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-combined-ca-bundle\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896242 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " 
pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-config-data\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cft7\" (UniqueName: \"kubernetes.io/projected/92265cff-6059-4736-a5cb-8935972c0bb8-kube-api-access-9cft7\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.896886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-logs\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.897201 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92265cff-6059-4736-a5cb-8935972c0bb8-logs\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.915100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-combined-ca-bundle\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.917845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.919915 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data-custom\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.935978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92265cff-6059-4736-a5cb-8935972c0bb8-config-data-custom\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.939724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86bc7ff485-lzxvk"] Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.948905 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cft7\" (UniqueName: 
\"kubernetes.io/projected/92265cff-6059-4736-a5cb-8935972c0bb8-kube-api-access-9cft7\") pod \"barbican-worker-56fbf7b8ff-h4cs5\" (UID: \"92265cff-6059-4736-a5cb-8935972c0bb8\") " pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.954505 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-combined-ca-bundle\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.955145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pkcz\" (UniqueName: \"kubernetes.io/projected/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-kube-api-access-5pkcz\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.968909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d1a0aba-21c7-4f4f-95f8-41802b2d23c3-config-data\") pod \"barbican-keystone-listener-8586b565b6-wdsdw\" (UID: \"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3\") " pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.971485 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.971540 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:53:32 crc kubenswrapper[4796]: I1212 04:53:32.972925 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.009230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-config-data\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.009771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcp7j\" (UniqueName: \"kubernetes.io/projected/b621dfe8-e202-40a6-8544-9195e0d7dc80-kube-api-access-rcp7j\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.009873 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-internal-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.009957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-config-data\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-scripts\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010114 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-combined-ca-bundle\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-credential-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/724e3890-930d-4492-8599-460add96a852-logs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-internal-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: 
\"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-combined-ca-bundle\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-scripts\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-public-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79xv\" (UniqueName: \"kubernetes.io/projected/724e3890-930d-4492-8599-460add96a852-kube-api-access-p79xv\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-fernet-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.010878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-public-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.014221 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/724e3890-930d-4492-8599-460add96a852-logs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.037339 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.047701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-public-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.048362 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-config-data\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.048535 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.052871 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.058809 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.060851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79xv\" (UniqueName: \"kubernetes.io/projected/724e3890-930d-4492-8599-460add96a852-kube-api-access-p79xv\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.080823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-combined-ca-bundle\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.094970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-internal-tls-certs\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.100722 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/724e3890-930d-4492-8599-460add96a852-scripts\") pod \"placement-5f8ffdd64b-7gmkf\" (UID: \"724e3890-930d-4492-8599-460add96a852\") " pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcp7j\" (UniqueName: \"kubernetes.io/projected/b621dfe8-e202-40a6-8544-9195e0d7dc80-kube-api-access-rcp7j\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-internal-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112414 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112518 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-config-data\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-scripts\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112769 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112842 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-combined-ca-bundle\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.112914 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-credential-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.113011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7fb\" (UniqueName: \"kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.113131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.113245 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-fernet-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.113328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-public-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.121881 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.133341 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.139094 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.140135 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.140448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-fernet-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.145309 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-config-data\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.146099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-scripts\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.147340 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-combined-ca-bundle\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.168598 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.174625 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.174795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-internal-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " 
pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.175014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-public-tls-certs\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.178827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcp7j\" (UniqueName: \"kubernetes.io/projected/b621dfe8-e202-40a6-8544-9195e0d7dc80-kube-api-access-rcp7j\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.188021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b621dfe8-e202-40a6-8544-9195e0d7dc80-credential-keys\") pod \"keystone-86bc7ff485-lzxvk\" (UID: \"b621dfe8-e202-40a6-8544-9195e0d7dc80\") " pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220796 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220897 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.220997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.221042 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcfjd\" (UniqueName: \"kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.221075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck7fb\" (UniqueName: \"kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.226237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.226577 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.227270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.227540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 
12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.229023 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.254380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck7fb\" (UniqueName: \"kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb\") pod \"dnsmasq-dns-848cf88cfc-zcggl\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.329510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.329580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.329606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.329698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcfjd\" (UniqueName: \"kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.329763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.330704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.352192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.359401 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.424948 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.441162 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.441496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.464639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcfjd\" (UniqueName: \"kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd\") pod \"barbican-api-6d559ddcdd-7chvv\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.574517 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb84d61-a2ac-49bd-9a28-9ffe4095afc8" path="/var/lib/kubelet/pods/beb84d61-a2ac-49bd-9a28-9ffe4095afc8/volumes" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.575269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lr2pq" event={"ID":"f2af8481-6c64-4dc2-8028-b5a548dca4ff","Type":"ContainerStarted","Data":"46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f"} Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.628184 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.671735 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lr2pq" podStartSLOduration=4.871959795 podStartE2EDuration="57.671710582s" podCreationTimestamp="2025-12-12 04:52:36 +0000 UTC" firstStartedPulling="2025-12-12 04:52:38.694915711 +0000 UTC m=+1149.570932858" lastFinishedPulling="2025-12-12 04:53:31.494666498 +0000 UTC m=+1202.370683645" observedRunningTime="2025-12-12 04:53:33.629587619 +0000 UTC m=+1204.505604776" watchObservedRunningTime="2025-12-12 04:53:33.671710582 +0000 UTC m=+1204.547727739" Dec 12 04:53:33 crc kubenswrapper[4796]: I1212 04:53:33.929306 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8586b565b6-wdsdw"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.118964 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56fbf7b8ff-h4cs5"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.244057 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f8ffdd64b-7gmkf"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.377949 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.394854 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.428566 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86bc7ff485-lzxvk"] Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.615814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" event={"ID":"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa","Type":"ContainerStarted","Data":"dd3ee1a745c9ce81887c8d89ea509113b5aed481d1818a4bc6821f2a0eca5018"} Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.626262 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86bc7ff485-lzxvk" event={"ID":"b621dfe8-e202-40a6-8544-9195e0d7dc80","Type":"ContainerStarted","Data":"38fa8f392b3a016dafd331dbd49039745784a226aa8ab10c8b4f6aa104aa9cf6"} Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.638575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" event={"ID":"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3","Type":"ContainerStarted","Data":"badc3aac1c7f4f0efcdba78091aefec60bd50569b48aeabc58048b5098066155"} Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.639885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f8ffdd64b-7gmkf" event={"ID":"724e3890-930d-4492-8599-460add96a852","Type":"ContainerStarted","Data":"462b0b97d4cb8df647de25d290967ea1e2f4ae49816b01c13cf16a3625bc31db"} Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.645820 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerStarted","Data":"a7a169a26e51abfe3725abd64b675fd5ae414b74b757b6d5130987b24aff2845"} Dec 12 04:53:34 crc kubenswrapper[4796]: I1212 04:53:34.648805 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" 
event={"ID":"92265cff-6059-4736-a5cb-8935972c0bb8","Type":"ContainerStarted","Data":"4995a0ec409613105bcec9698959425747529150ec5e5e778f71fac38891db50"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.664923 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerID="96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6" exitCode=0 Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.665215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" event={"ID":"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa","Type":"ContainerDied","Data":"96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.707340 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86bc7ff485-lzxvk" event={"ID":"b621dfe8-e202-40a6-8544-9195e0d7dc80","Type":"ContainerStarted","Data":"17cb4569b54b25088a4c75b4c09f8e3433a4a377dc766751721d2f4bc1de5ec4"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.708047 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.736193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f8ffdd64b-7gmkf" event={"ID":"724e3890-930d-4492-8599-460add96a852","Type":"ContainerStarted","Data":"34dec330c320aa0c7b3442a4e08f4be97fdbf70643ab78df5c7c62ec2f9b4d75"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.736453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f8ffdd64b-7gmkf" event={"ID":"724e3890-930d-4492-8599-460add96a852","Type":"ContainerStarted","Data":"c92b0f1857e693a022a72c0f7f8ce9106ffa1b93bc4bc8a0e7da7fd5592b43f0"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.737343 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.737473 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.752468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerStarted","Data":"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.752669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerStarted","Data":"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41"} Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.753531 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.753649 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.783596 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f8ffdd64b-7gmkf" podStartSLOduration=3.783577753 podStartE2EDuration="3.783577753s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:35.773812299 +0000 UTC m=+1206.649829446" watchObservedRunningTime="2025-12-12 04:53:35.783577753 +0000 UTC m=+1206.659594890" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.783714 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-86bc7ff485-lzxvk" podStartSLOduration=3.783711277 podStartE2EDuration="3.783711277s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:35.740788308 +0000 UTC m=+1206.616805455" watchObservedRunningTime="2025-12-12 04:53:35.783711277 +0000 UTC m=+1206.659728424" Dec 12 04:53:35 crc kubenswrapper[4796]: I1212 04:53:35.841440 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d559ddcdd-7chvv" podStartSLOduration=3.8414129470000002 podStartE2EDuration="3.841412947s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:35.791664255 +0000 UTC m=+1206.667681402" watchObservedRunningTime="2025-12-12 04:53:35.841412947 +0000 UTC m=+1206.717430094" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.652105 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77c7c5bcf6-phtcl"] Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.654262 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.656884 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.657238 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.703934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c7c5bcf6-phtcl"] Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.737660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-public-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.737732 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.737757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9k2t\" (UniqueName: \"kubernetes.io/projected/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-kube-api-access-z9k2t\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.737859 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-combined-ca-bundle\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.737987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data-custom\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.738174 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-internal-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.738349 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-logs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.790904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" event={"ID":"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa","Type":"ContainerStarted","Data":"37293813af3e9850dbbe2da03c9f21305146e78b0576a89c6027723132c9cd8d"} Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.791152 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.821416 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" podStartSLOduration=4.821395583 podStartE2EDuration="4.821395583s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:36.814707595 +0000 UTC m=+1207.690724732" watchObservedRunningTime="2025-12-12 04:53:36.821395583 +0000 UTC m=+1207.697412730" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9k2t\" (UniqueName: \"kubernetes.io/projected/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-kube-api-access-z9k2t\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-combined-ca-bundle\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839887 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data-custom\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-internal-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.839987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-logs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.840038 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-public-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.843453 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-logs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.858568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.860635 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-combined-ca-bundle\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.860980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-config-data-custom\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.868042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-internal-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.874795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9k2t\" (UniqueName: \"kubernetes.io/projected/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-kube-api-access-z9k2t\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.885776 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5fd6e6-e8f6-46da-81fa-5ae035fbc255-public-tls-certs\") pod \"barbican-api-77c7c5bcf6-phtcl\" (UID: \"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255\") " pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:36 crc kubenswrapper[4796]: I1212 04:53:36.979532 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:37 crc kubenswrapper[4796]: I1212 04:53:37.025183 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:53:37 crc kubenswrapper[4796]: I1212 04:53:37.136423 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:53:37 crc kubenswrapper[4796]: I1212 04:53:37.734858 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c7c5bcf6-phtcl"] Dec 12 04:53:38 crc kubenswrapper[4796]: I1212 04:53:38.813198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c7c5bcf6-phtcl" event={"ID":"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255","Type":"ContainerStarted","Data":"62e700ea6e63eb301501640e1eed05771a1712eb0a28c198a333cc36a8efbedf"} Dec 12 04:53:40 crc kubenswrapper[4796]: I1212 04:53:40.924499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" event={"ID":"92265cff-6059-4736-a5cb-8935972c0bb8","Type":"ContainerStarted","Data":"d39046261a292555b8eb2fc9db7033a8a0d82c97f8e4e05679746682c03819d2"} Dec 12 04:53:40 crc kubenswrapper[4796]: I1212 04:53:40.945848 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c7c5bcf6-phtcl" event={"ID":"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255","Type":"ContainerStarted","Data":"b564c6fdd28947f5f9a7e1ecaca6e085bcbdacdef0823cde77114b6c29755f3c"} Dec 12 04:53:40 crc kubenswrapper[4796]: I1212 04:53:40.966330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" event={"ID":"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3","Type":"ContainerStarted","Data":"1ac8342f8fb7a80240810f67efa6fd3427f4a0b1b72731a0af8302d162d4200e"} Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.080462 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" event={"ID":"1d1a0aba-21c7-4f4f-95f8-41802b2d23c3","Type":"ContainerStarted","Data":"1e76f22fc99e8af1b751474b25975d2d4196e605e81175b073f4fa9ed5d20c22"} Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.108381 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" event={"ID":"92265cff-6059-4736-a5cb-8935972c0bb8","Type":"ContainerStarted","Data":"84f663ab1950214ba7e1744ca9718e67383a6d68bd004ba1dff6259065fb0232"} Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.132529 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c7c5bcf6-phtcl" event={"ID":"1c5fd6e6-e8f6-46da-81fa-5ae035fbc255","Type":"ContainerStarted","Data":"cd3285e4cb4891a6d67becf71af18bb84e1cdf2705921df4943fc24cdcc739f4"} Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.133071 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.133205 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.133479 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8586b565b6-wdsdw" podStartSLOduration=4.067567404 podStartE2EDuration="10.133467382s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="2025-12-12 04:53:33.968074589 +0000 UTC m=+1204.844091736" lastFinishedPulling="2025-12-12 04:53:40.033974577 +0000 UTC m=+1210.909991714" observedRunningTime="2025-12-12 04:53:42.107578914 +0000 UTC m=+1212.983596061" watchObservedRunningTime="2025-12-12 04:53:42.133467382 +0000 UTC m=+1213.009484529" Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.165010 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-56fbf7b8ff-h4cs5" podStartSLOduration=4.181135328 podStartE2EDuration="10.164988346s" podCreationTimestamp="2025-12-12 04:53:32 +0000 UTC" firstStartedPulling="2025-12-12 04:53:34.136610347 +0000 UTC m=+1205.012627494" lastFinishedPulling="2025-12-12 04:53:40.120463365 +0000 UTC m=+1210.996480512" observedRunningTime="2025-12-12 04:53:42.147674275 +0000 UTC m=+1213.023691422" watchObservedRunningTime="2025-12-12 04:53:42.164988346 +0000 UTC m=+1213.041005493" Dec 12 04:53:42 crc kubenswrapper[4796]: I1212 04:53:42.179192 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77c7c5bcf6-phtcl" podStartSLOduration=6.179172688 podStartE2EDuration="6.179172688s" podCreationTimestamp="2025-12-12 04:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:42.17859792 +0000 UTC m=+1213.054615067" watchObservedRunningTime="2025-12-12 04:53:42.179172688 +0000 UTC m=+1213.055189835" Dec 12 04:53:43 crc kubenswrapper[4796]: I1212 04:53:43.136774 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:53:43 crc kubenswrapper[4796]: I1212 04:53:43.465646 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:53:43 crc kubenswrapper[4796]: I1212 04:53:43.545391 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:43 crc kubenswrapper[4796]: I1212 04:53:43.545629 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-s242x" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" containerID="cri-o://1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33" gracePeriod=10 Dec 12 04:53:44 crc kubenswrapper[4796]: I1212 04:53:44.167733 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerID="1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33" exitCode=0 Dec 12 04:53:44 crc kubenswrapper[4796]: I1212 04:53:44.167775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-s242x" event={"ID":"67ffc83e-cc88-4963-9302-fa6c816ce4c4","Type":"ContainerDied","Data":"1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33"} Dec 12 04:53:44 crc kubenswrapper[4796]: I1212 04:53:44.674515 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:53:45 crc kubenswrapper[4796]: I1212 04:53:45.828780 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c45989d6c-2r8mn" Dec 12 04:53:45 crc kubenswrapper[4796]: I1212 04:53:45.911890 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:53:45 crc kubenswrapper[4796]: I1212 04:53:45.912165 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68c4f9dc76-n9c9p" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-api" containerID="cri-o://fb9efb26da5936ab195fd51951ea07162000999ef7bcec6abb416a860a74b1fa" gracePeriod=30 Dec 12 04:53:45 crc kubenswrapper[4796]: I1212 04:53:45.912736 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68c4f9dc76-n9c9p" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-httpd" containerID="cri-o://5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17" gracePeriod=30 Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.019203 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.094843 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.209692 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c750273-c3b9-46b0-b884-422d779e73e3" containerID="5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17" exitCode=0 Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.209744 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerDied","Data":"5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17"} Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.211840 4796 generic.go:334] "Generic (PLEG): container finished" podID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" containerID="46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f" exitCode=0 Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.211870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lr2pq" event={"ID":"f2af8481-6c64-4dc2-8028-b5a548dca4ff","Type":"ContainerDied","Data":"46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f"} Dec 12 04:53:47 crc kubenswrapper[4796]: I1212 04:53:47.957643 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-s242x" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 12 04:53:48 crc kubenswrapper[4796]: I1212 04:53:48.097657 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:48 crc kubenswrapper[4796]: I1212 04:53:48.678446 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:53:49 crc kubenswrapper[4796]: W1212 04:53:49.012591 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7dfbe4_b481_4af9_8f9a_df84cb8a99fa.slice/crio-conmon-96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7dfbe4_b481_4af9_8f9a_df84cb8a99fa.slice/crio-conmon-96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6.scope: no such file or directory Dec 12 04:53:49 crc kubenswrapper[4796]: W1212 04:53:49.012996 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7dfbe4_b481_4af9_8f9a_df84cb8a99fa.slice/crio-96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7dfbe4_b481_4af9_8f9a_df84cb8a99fa.slice/crio-96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6.scope: no such file or directory Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.268000 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerID="c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.268045 4796 generic.go:334] "Generic (PLEG): container finished" podID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerID="7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.268144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" 
event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerDied","Data":"c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f"} Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.268178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerDied","Data":"7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1"} Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.276549 4796 generic.go:334] "Generic (PLEG): container finished" podID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerID="351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.276578 4796 generic.go:334] "Generic (PLEG): container finished" podID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerID="a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.276618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerDied","Data":"351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d"} Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.276642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerDied","Data":"a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020"} Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.302046 4796 generic.go:334] "Generic (PLEG): container finished" podID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerID="9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.302081 4796 generic.go:334] "Generic (PLEG): container finished" podID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerID="629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16" exitCode=137 Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.302102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerDied","Data":"9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226"} Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.302126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerDied","Data":"629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16"} Dec 12 04:53:49 crc kubenswrapper[4796]: E1212 04:53:49.362524 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9082485_1887_4b6d_8e1f_371825f61dfc.slice/crio-4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b8adc6_3f38_4bb0_92bb_3ba777872a01.slice/crio-351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67ffc83e_cc88_4963_9302_fa6c816ce4c4.slice/crio-conmon-1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2af8481_6c64_4dc2_8028_b5a548dca4ff.slice/crio-46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c750273_c3b9_46b0_b884_422d779e73e3.slice/crio-5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa5f34c_65ff_426f_9752_e88125dc10aa.slice/crio-9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa5f34c_65ff_426f_9752_e88125dc10aa.slice/crio-conmon-9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5d7375_11fd_43ae_84a7_13fc0be7f11c.slice/crio-conmon-c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa5f34c_65ff_426f_9752_e88125dc10aa.slice/crio-conmon-629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c750273_c3b9_46b0_b884_422d779e73e3.slice/crio-conmon-5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5d7375_11fd_43ae_84a7_13fc0be7f11c.slice/crio-conmon-7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa5f34c_65ff_426f_9752_e88125dc10aa.slice/crio-629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b8adc6_3f38_4bb0_92bb_3ba777872a01.slice/crio-conmon-351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2af8481_6c64_4dc2_8028_b5a548dca4ff.slice/crio-conmon-46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5d7375_11fd_43ae_84a7_13fc0be7f11c.slice/crio-7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b8adc6_3f38_4bb0_92bb_3ba777872a01.slice/crio-conmon-a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020.scope\": RecentStats: unable to find data in memory cache]" Dec 12 04:53:49 crc kubenswrapper[4796]: I1212 04:53:49.875429 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.082064 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77c7c5bcf6-phtcl" Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.142832 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.143091 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" containerID="cri-o://fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41" gracePeriod=30 Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.143156 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api" containerID="cri-o://a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d" gracePeriod=30 Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.152969 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.166519 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.166525 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": EOF" Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.425688 4796 generic.go:334] "Generic (PLEG): container finished" podID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerID="fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41" exitCode=143 Dec 12 04:53:50 crc kubenswrapper[4796]: I1212 04:53:50.425758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerDied","Data":"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41"} Dec 12 04:53:52 crc kubenswrapper[4796]: I1212 04:53:52.957135 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-s242x" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Dec 12 04:53:54 crc kubenswrapper[4796]: E1212 04:53:54.524050 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 12 04:53:54 crc kubenswrapper[4796]: E1212 04:53:54.524756 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ktwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c16aa3d8-e979-4370-bda3-22d68070a7ff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 12 04:53:54 crc kubenswrapper[4796]: E1212 04:53:54.527609 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.704322 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.732298 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:43100->10.217.0.158:9311: read: connection reset by peer" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.732779 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:43078->10.217.0.158:9311: read: connection reset by peer" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.732907 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:43086->10.217.0.158:9311: read: connection reset by peer" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.733445 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.733640 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.734090 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d559ddcdd-7chvv" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.853108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4rpl\" (UniqueName: \"kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.855586 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.855915 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.856017 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" 
(UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.856400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.856587 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.856835 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data\") pod \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\" (UID: \"f2af8481-6c64-4dc2-8028-b5a548dca4ff\") " Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.857876 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2af8481-6c64-4dc2-8028-b5a548dca4ff-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.871522 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" (UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.883657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl" (OuterVolumeSpecName: "kube-api-access-w4rpl") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" (UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "kube-api-access-w4rpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.883764 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts" (OuterVolumeSpecName: "scripts") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" (UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.927520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" (UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.982163 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data" (OuterVolumeSpecName: "config-data") pod "f2af8481-6c64-4dc2-8028-b5a548dca4ff" (UID: "f2af8481-6c64-4dc2-8028-b5a548dca4ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.984749 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.984773 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.984783 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.984791 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4rpl\" (UniqueName: \"kubernetes.io/projected/f2af8481-6c64-4dc2-8028-b5a548dca4ff-kube-api-access-w4rpl\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:54 crc kubenswrapper[4796]: I1212 04:53:54.984801 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2af8481-6c64-4dc2-8028-b5a548dca4ff-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.055196 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.147007 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vtpb\" (UniqueName: \"kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187620 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187671 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187710 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187756 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.187802 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config\") pod \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\" (UID: \"67ffc83e-cc88-4963-9302-fa6c816ce4c4\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.205361 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb" (OuterVolumeSpecName: "kube-api-access-4vtpb") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "kube-api-access-4vtpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.255357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.257875 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config" (OuterVolumeSpecName: "config") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.275694 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.288875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key\") pod \"1aa5f34c-65ff-426f-9752-e88125dc10aa\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.288957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hhld\" (UniqueName: \"kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld\") pod \"1aa5f34c-65ff-426f-9752-e88125dc10aa\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs\") pod \"1aa5f34c-65ff-426f-9752-e88125dc10aa\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289080 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data\") pod \"1aa5f34c-65ff-426f-9752-e88125dc10aa\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts\") pod \"1aa5f34c-65ff-426f-9752-e88125dc10aa\" (UID: \"1aa5f34c-65ff-426f-9752-e88125dc10aa\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289477 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289562 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vtpb\" (UniqueName: \"kubernetes.io/projected/67ffc83e-cc88-4963-9302-fa6c816ce4c4-kube-api-access-4vtpb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289572 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.289580 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.290113 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs" (OuterVolumeSpecName: "logs") pod 
"1aa5f34c-65ff-426f-9752-e88125dc10aa" (UID: "1aa5f34c-65ff-426f-9752-e88125dc10aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.302652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld" (OuterVolumeSpecName: "kube-api-access-9hhld") pod "1aa5f34c-65ff-426f-9752-e88125dc10aa" (UID: "1aa5f34c-65ff-426f-9752-e88125dc10aa"). InnerVolumeSpecName "kube-api-access-9hhld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.302698 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1aa5f34c-65ff-426f-9752-e88125dc10aa" (UID: "1aa5f34c-65ff-426f-9752-e88125dc10aa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.323804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.326105 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67ffc83e-cc88-4963-9302-fa6c816ce4c4" (UID: "67ffc83e-cc88-4963-9302-fa6c816ce4c4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.348300 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts" (OuterVolumeSpecName: "scripts") pod "1aa5f34c-65ff-426f-9752-e88125dc10aa" (UID: "1aa5f34c-65ff-426f-9752-e88125dc10aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.354851 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data" (OuterVolumeSpecName: "config-data") pod "1aa5f34c-65ff-426f-9752-e88125dc10aa" (UID: "1aa5f34c-65ff-426f-9752-e88125dc10aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395332 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1aa5f34c-65ff-426f-9752-e88125dc10aa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395365 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hhld\" (UniqueName: \"kubernetes.io/projected/1aa5f34c-65ff-426f-9752-e88125dc10aa-kube-api-access-9hhld\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395375 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa5f34c-65ff-426f-9752-e88125dc10aa-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395384 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395392 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1aa5f34c-65ff-426f-9752-e88125dc10aa-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395400 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.395408 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67ffc83e-cc88-4963-9302-fa6c816ce4c4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.452616 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.453162 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f5d564649-v799z" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.488536 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.508346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d574cc9cf-zb7lp" event={"ID":"1aa5f34c-65ff-426f-9752-e88125dc10aa","Type":"ContainerDied","Data":"0ed8fbb8910f73c0717c0ac4c11a6ac0ffcb87a42459221249d48458b0b3cefd"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.508569 4796 scope.go:117] "RemoveContainer" containerID="9704cbb1a5a3165dec6b30740489ec3a21f6ad75c407d2e3f53686b3bd8e8226" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.508683 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d574cc9cf-zb7lp" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.519592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686cc94ff9-lggnr" event={"ID":"3f5d7375-11fd-43ae-84a7-13fc0be7f11c","Type":"ContainerDied","Data":"e92affef2c3dc7883ae5270b50468c156647b8cac5b5ab756125de3a072813fc"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.519695 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686cc94ff9-lggnr" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.554467 4796 generic.go:334] "Generic (PLEG): container finished" podID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerID="a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d" exitCode=0 Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.554542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerDied","Data":"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.554568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d559ddcdd-7chvv" event={"ID":"856e3496-fbc0-4a54-ad91-6f49e3777130","Type":"ContainerDied","Data":"a7a169a26e51abfe3725abd64b675fd5ae414b74b757b6d5130987b24aff2845"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.554633 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d559ddcdd-7chvv" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.565111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f5d564649-v799z" event={"ID":"74b8adc6-3f38-4bb0-92bb-3ba777872a01","Type":"ContainerDied","Data":"efbd7a3eaf188f43ef4634860ab8643fae07b35565c214ee85ec117a0abce39f"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.565202 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f5d564649-v799z" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.573089 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-s242x" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.573130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-s242x" event={"ID":"67ffc83e-cc88-4963-9302-fa6c816ce4c4","Type":"ContainerDied","Data":"cd2ca5edb59133d7c3ecbf8741e2aad2885e0d36b0a2b0b5c8cfe2b62866cacd"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.583413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lr2pq" event={"ID":"f2af8481-6c64-4dc2-8028-b5a548dca4ff","Type":"ContainerDied","Data":"b4f3e0c424c521bcb7ff6783c36e17f2473ffc080bd6520ec555cc5c3c816f7f"} Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.583453 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f3e0c424c521bcb7ff6783c36e17f2473ffc080bd6520ec555cc5c3c816f7f" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.583524 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="ceilometer-notification-agent" containerID="cri-o://cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1" gracePeriod=30 Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.583626 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lr2pq" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.584832 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="sg-core" containerID="cri-o://362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe" gracePeriod=30 Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts\") pod \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607209 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data\") pod \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data\") pod \"856e3496-fbc0-4a54-ad91-6f49e3777130\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607289 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts\") pod \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607304 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key\") pod \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " Dec 12 04:53:55 crc 
kubenswrapper[4796]: I1212 04:53:55.607324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom\") pod \"856e3496-fbc0-4a54-ad91-6f49e3777130\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607357 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcfjd\" (UniqueName: \"kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd\") pod \"856e3496-fbc0-4a54-ad91-6f49e3777130\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607376 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle\") pod \"856e3496-fbc0-4a54-ad91-6f49e3777130\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs\") pod \"856e3496-fbc0-4a54-ad91-6f49e3777130\" (UID: \"856e3496-fbc0-4a54-ad91-6f49e3777130\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607431 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs\") pod \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607484 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data\") pod \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key\") pod \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607582 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhqm\" (UniqueName: \"kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm\") pod \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\" (UID: \"74b8adc6-3f38-4bb0-92bb-3ba777872a01\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607611 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndk8k\" (UniqueName: \"kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k\") pod \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.607630 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs\") pod \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\" (UID: \"3f5d7375-11fd-43ae-84a7-13fc0be7f11c\") " Dec 12 04:53:55 crc kubenswrapper[4796]: 
I1212 04:53:55.611889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "856e3496-fbc0-4a54-ad91-6f49e3777130" (UID: "856e3496-fbc0-4a54-ad91-6f49e3777130"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.616376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs" (OuterVolumeSpecName: "logs") pod "3f5d7375-11fd-43ae-84a7-13fc0be7f11c" (UID: "3f5d7375-11fd-43ae-84a7-13fc0be7f11c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.628406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3f5d7375-11fd-43ae-84a7-13fc0be7f11c" (UID: "3f5d7375-11fd-43ae-84a7-13fc0be7f11c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.628741 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs" (OuterVolumeSpecName: "logs") pod "74b8adc6-3f38-4bb0-92bb-3ba777872a01" (UID: "74b8adc6-3f38-4bb0-92bb-3ba777872a01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.628987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs" (OuterVolumeSpecName: "logs") pod "856e3496-fbc0-4a54-ad91-6f49e3777130" (UID: "856e3496-fbc0-4a54-ad91-6f49e3777130"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.630982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74b8adc6-3f38-4bb0-92bb-3ba777872a01" (UID: "74b8adc6-3f38-4bb0-92bb-3ba777872a01"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.652599 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm" (OuterVolumeSpecName: "kube-api-access-5nhqm") pod "74b8adc6-3f38-4bb0-92bb-3ba777872a01" (UID: "74b8adc6-3f38-4bb0-92bb-3ba777872a01"). InnerVolumeSpecName "kube-api-access-5nhqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.657998 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k" (OuterVolumeSpecName: "kube-api-access-ndk8k") pod "3f5d7375-11fd-43ae-84a7-13fc0be7f11c" (UID: "3f5d7375-11fd-43ae-84a7-13fc0be7f11c"). InnerVolumeSpecName "kube-api-access-ndk8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.667663 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd" (OuterVolumeSpecName: "kube-api-access-lcfjd") pod "856e3496-fbc0-4a54-ad91-6f49e3777130" (UID: "856e3496-fbc0-4a54-ad91-6f49e3777130"). InnerVolumeSpecName "kube-api-access-lcfjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.703518 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts" (OuterVolumeSpecName: "scripts") pod "74b8adc6-3f38-4bb0-92bb-3ba777872a01" (UID: "74b8adc6-3f38-4bb0-92bb-3ba777872a01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709650 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709686 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhqm\" (UniqueName: \"kubernetes.io/projected/74b8adc6-3f38-4bb0-92bb-3ba777872a01-kube-api-access-5nhqm\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709697 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndk8k\" (UniqueName: \"kubernetes.io/projected/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-kube-api-access-ndk8k\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709727 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709740 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709748 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b8adc6-3f38-4bb0-92bb-3ba777872a01-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709757 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709765 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcfjd\" (UniqueName: \"kubernetes.io/projected/856e3496-fbc0-4a54-ad91-6f49e3777130-kube-api-access-lcfjd\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709775 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856e3496-fbc0-4a54-ad91-6f49e3777130-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.709782 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/74b8adc6-3f38-4bb0-92bb-3ba777872a01-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.721090 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data" (OuterVolumeSpecName: "config-data") pod "3f5d7375-11fd-43ae-84a7-13fc0be7f11c" (UID: "3f5d7375-11fd-43ae-84a7-13fc0be7f11c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.762474 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.780409 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d574cc9cf-zb7lp"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.786335 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts" (OuterVolumeSpecName: "scripts") pod "3f5d7375-11fd-43ae-84a7-13fc0be7f11c" (UID: "3f5d7375-11fd-43ae-84a7-13fc0be7f11c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.786414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856e3496-fbc0-4a54-ad91-6f49e3777130" (UID: "856e3496-fbc0-4a54-ad91-6f49e3777130"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.790352 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.800901 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-s242x"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.805954 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data" (OuterVolumeSpecName: "config-data") pod "74b8adc6-3f38-4bb0-92bb-3ba777872a01" (UID: "74b8adc6-3f38-4bb0-92bb-3ba777872a01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.811563 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.811592 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f5d7375-11fd-43ae-84a7-13fc0be7f11c-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.811602 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.811613 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b8adc6-3f38-4bb0-92bb-3ba777872a01-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.827509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data" (OuterVolumeSpecName: "config-data") pod "856e3496-fbc0-4a54-ad91-6f49e3777130" (UID: "856e3496-fbc0-4a54-ad91-6f49e3777130"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.866542 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.873991 4796 scope.go:117] "RemoveContainer" containerID="629b68b71fe14ce58251ff0d7c51790c7ccc76df1a1e69de5364353e78130f16" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.881967 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-686cc94ff9-lggnr"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.912826 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856e3496-fbc0-4a54-ad91-6f49e3777130-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.930555 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.944644 4796 scope.go:117] "RemoveContainer" containerID="c5e97ad5d577e122c519f0f2b0aa815027d78bf25fecec0251969c772a265e6f" Dec 12 04:53:55 crc kubenswrapper[4796]: I1212 04:53:55.972409 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d559ddcdd-7chvv"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.006368 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.018680 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f5d564649-v799z"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075485 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075816 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="init" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075831 4796 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="init" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075841 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075848 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075863 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075871 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075898 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075904 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075915 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075920 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075934 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075940 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075954 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075960 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075972 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075978 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.075988 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.075994 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.076011 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" containerName="cinder-db-sync" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.076018 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" 
containerName="cinder-db-sync" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.076030 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.076036 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083056 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083177 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083194 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" containerName="dnsmasq-dns" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083211 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083230 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083246 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083264 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" containerName="barbican-api-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083298 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" containerName="cinder-db-sync" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083318 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.083332 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" containerName="horizon-log" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.101923 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.117933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.120726 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.123722 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.181960 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fdvrn" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.195217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.220011 4796 scope.go:117] "RemoveContainer" containerID="7f4a68101ccb948dcda6f256161f64966521a1e7eac978e5e55c63b48eb74bc1" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.250015 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.276642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.276761 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4dk\" (UniqueName: \"kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.276879 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.267939 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.277036 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.277079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 
12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.274812 4796 scope.go:117] "RemoveContainer" containerID="a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.281948 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.342156 4796 scope.go:117] "RemoveContainer" containerID="fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.375829 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381173 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvp4\" (UniqueName: \"kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381197 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381231 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381295 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381322 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4dk\" (UniqueName: \"kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.381413 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.387394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.391755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.394641 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.394901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.396114 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.399155 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.404599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.405140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.411526 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.420845 4796 scope.go:117] "RemoveContainer" containerID="a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.421043 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb4dk\" (UniqueName: \"kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk\") pod \"cinder-scheduler-0\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.423833 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d\": container with ID starting with a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d not found: ID does not exist" containerID="a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.423869 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d"} err="failed to get container status \"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d\": rpc error: code = NotFound desc = could not find container \"a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d\": container with ID starting with a73b6d589f42e7d94f6da275f798aab381f1ba697ac2bb42a7f6dff6aa01ec0d not found: ID does not exist" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.423895 4796 scope.go:117] "RemoveContainer" containerID="fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41" Dec 12 04:53:56 crc kubenswrapper[4796]: E1212 04:53:56.429404 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41\": container with ID starting with fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41 not found: ID does not exist" containerID="fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.429469 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41"} err="failed to get container status \"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41\": rpc error: code = NotFound desc = could not find container \"fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41\": container with ID starting with fa70beea5d2c041846f06c7cac29ec4168726deebe55cfed590fe0659f67cd41 not found: ID does not exist" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.429515 4796 scope.go:117] "RemoveContainer" containerID="351ae85d2151efcd64bc413a91a5b2572ee84ab20e3c036c6546cb6e50fe8c3d" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.467372 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.484819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.484882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.484932 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.484955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.484994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czb2s\" (UniqueName: \"kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485093 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xlvp4\" (UniqueName: \"kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485118 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.485216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.486752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.487492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.488251 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.489810 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.491434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.511417 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvp4\" (UniqueName: \"kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4\") pod \"dnsmasq-dns-6578955fd5-gg9mc\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586737 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czb2s\" (UniqueName: \"kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586827 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.586854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.588200 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.588304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.592808 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.594952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.602379 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.602613 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.630001 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czb2s\" (UniqueName: \"kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s\") pod \"cinder-api-0\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.660372 4796 scope.go:117] "RemoveContainer" containerID="a11c4626af6e915974023ce48dafd028144fd2c6d2589364b594e48654695020" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.670897 4796 generic.go:334] "Generic (PLEG): container finished" podID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerID="362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe" exitCode=2 Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.670954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerDied","Data":"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe"} Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.672839 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.729636 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.806996 4796 scope.go:117] "RemoveContainer" containerID="1e6d3d06bbffcd062d9f7c892097f439939c15d7a4282406b6e127470126df33" Dec 12 04:53:56 crc kubenswrapper[4796]: I1212 04:53:56.851921 4796 scope.go:117] "RemoveContainer" containerID="73f32a9f2b54e5cb1cfba4589990c682470e1743689a9aeda2a1d7b8e815874c" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.021612 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.021692 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.022557 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928"} pod="openstack/horizon-6cb55bccb4-z8p6q" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.022590 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" containerID="cri-o://7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928" gracePeriod=30 Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.097314 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.097402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.098571 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0"} pod="openstack/horizon-67764d6b9b-h7fdk" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.098611 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" containerID="cri-o://70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0" gracePeriod=30 Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.219000 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.313783 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314009 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314262 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314364 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ktwr\" (UniqueName: \"kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314438 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314510 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts\") pod \"c16aa3d8-e979-4370-bda3-22d68070a7ff\" (UID: \"c16aa3d8-e979-4370-bda3-22d68070a7ff\") " Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.314638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.315179 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.322785 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.362185 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr" (OuterVolumeSpecName: "kube-api-access-2ktwr") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). InnerVolumeSpecName "kube-api-access-2ktwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.362387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts" (OuterVolumeSpecName: "scripts") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.396891 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.397602 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data" (OuterVolumeSpecName: "config-data") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.416619 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.417044 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c16aa3d8-e979-4370-bda3-22d68070a7ff-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.417129 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ktwr\" (UniqueName: \"kubernetes.io/projected/c16aa3d8-e979-4370-bda3-22d68070a7ff-kube-api-access-2ktwr\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.417221 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.417310 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.433266 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c16aa3d8-e979-4370-bda3-22d68070a7ff" (UID: "c16aa3d8-e979-4370-bda3-22d68070a7ff"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.465824 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa5f34c-65ff-426f-9752-e88125dc10aa" path="/var/lib/kubelet/pods/1aa5f34c-65ff-426f-9752-e88125dc10aa/volumes" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.466563 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5d7375-11fd-43ae-84a7-13fc0be7f11c" path="/var/lib/kubelet/pods/3f5d7375-11fd-43ae-84a7-13fc0be7f11c/volumes" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.472008 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ffc83e-cc88-4963-9302-fa6c816ce4c4" path="/var/lib/kubelet/pods/67ffc83e-cc88-4963-9302-fa6c816ce4c4/volumes" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.472659 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b8adc6-3f38-4bb0-92bb-3ba777872a01" path="/var/lib/kubelet/pods/74b8adc6-3f38-4bb0-92bb-3ba777872a01/volumes" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.473254 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856e3496-fbc0-4a54-ad91-6f49e3777130" path="/var/lib/kubelet/pods/856e3496-fbc0-4a54-ad91-6f49e3777130/volumes" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.484776 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:53:57 crc kubenswrapper[4796]: W1212 04:53:57.497024 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9d5168_932a_4dcd_aca5_8a3669525d4f.slice/crio-85f4021a29bf80f9e0e753834be0c698f6c1c5a8a74011dc9ffa5ce9e0a151ea WatchSource:0}: Error finding container 85f4021a29bf80f9e0e753834be0c698f6c1c5a8a74011dc9ffa5ce9e0a151ea: Status 404 returned error can't find the container with id 85f4021a29bf80f9e0e753834be0c698f6c1c5a8a74011dc9ffa5ce9e0a151ea Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.507357 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.523367 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c16aa3d8-e979-4370-bda3-22d68070a7ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.737568 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.741009 4796 generic.go:334] "Generic (PLEG): container finished" podID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerID="cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1" exitCode=0 Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.741078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerDied","Data":"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1"} Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.741107 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c16aa3d8-e979-4370-bda3-22d68070a7ff","Type":"ContainerDied","Data":"262bd0a6ff80c3dedf09497bf716b58c407335fc693063e4aaf136e35b690b1b"} Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.741126 4796 scope.go:117] 
"RemoveContainer" containerID="362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.741220 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.746334 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerStarted","Data":"85f4021a29bf80f9e0e753834be0c698f6c1c5a8a74011dc9ffa5ce9e0a151ea"} Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.747469 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" event={"ID":"1ae63b03-d161-4745-9912-afab23ec6f09","Type":"ContainerStarted","Data":"13aeafce32de65f886ae72adf52cdc38050d220fdc3af5cb451522dbdad422ce"} Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.793192 4796 scope.go:117] "RemoveContainer" containerID="cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.855349 4796 scope.go:117] "RemoveContainer" containerID="362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe" Dec 12 04:53:57 crc kubenswrapper[4796]: E1212 04:53:57.855695 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe\": container with ID starting with 362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe not found: ID does not exist" containerID="362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.855722 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe"} err="failed to get container status \"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe\": rpc error: code = NotFound desc = could not find container \"362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe\": container with ID starting with 362425834e4b99f15b4502012804380336c711223d8b799cea8c8e70acbf7bbe not found: ID does not exist" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.855741 4796 scope.go:117] "RemoveContainer" containerID="cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1" Dec 12 04:53:57 crc kubenswrapper[4796]: E1212 04:53:57.855989 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1\": container with ID starting with cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1 not found: ID does not exist" containerID="cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.856009 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1"} err="failed to get container status \"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1\": rpc error: code = NotFound desc = could not find container \"cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1\": container with ID starting with cde66636346d77cd2cd9abcf68fac86ec2988742aa54f5ac9798243cd170f6d1 not found: ID does not exist" Dec 12 
04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.911621 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.917685 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.942446 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: E1212 04:53:57.943012 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="sg-core" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.943035 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="sg-core" Dec 12 04:53:57 crc kubenswrapper[4796]: E1212 04:53:57.943061 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="ceilometer-notification-agent" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.943071 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="ceilometer-notification-agent" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.943287 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="sg-core" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.943327 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" containerName="ceilometer-notification-agent" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.945163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.949915 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.954972 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:53:57 crc kubenswrapper[4796]: I1212 04:53:57.955157 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m58hb\" (UniqueName: \"kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039662 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039796 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.039829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.141747 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.141879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.141930 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m58hb\" (UniqueName: \"kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.141954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.141997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.142025 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc 
kubenswrapper[4796]: I1212 04:53:58.142068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.142599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.146570 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.149617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.153901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.154994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.158620 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.161388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m58hb\" (UniqueName: \"kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb\") pod \"ceilometer-0\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.268882 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.794791 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.834725 4796 generic.go:334] "Generic (PLEG): container finished" podID="1ae63b03-d161-4745-9912-afab23ec6f09" containerID="df4b693b8018f4695f716f1cd278e2d1de4026c737fdc1b6e20157375a0a7fdb" exitCode=0 Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.834839 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" event={"ID":"1ae63b03-d161-4745-9912-afab23ec6f09","Type":"ContainerDied","Data":"df4b693b8018f4695f716f1cd278e2d1de4026c737fdc1b6e20157375a0a7fdb"} Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.840213 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerStarted","Data":"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2"} Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.840260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerStarted","Data":"9e591b2266eeb087ea5b42550dc61703148c6369cedc4eb13fe4a465bc46825a"} Dec 12 04:53:58 crc kubenswrapper[4796]: I1212 04:53:58.996713 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.437730 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16aa3d8-e979-4370-bda3-22d68070a7ff" path="/var/lib/kubelet/pods/c16aa3d8-e979-4370-bda3-22d68070a7ff/volumes" Dec 12 04:53:59 crc kubenswrapper[4796]: E1212 04:53:59.739361 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9082485_1887_4b6d_8e1f_371825f61dfc.slice/crio-4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9\": RecentStats: unable to find data in memory cache]" Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.883065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerStarted","Data":"73ac616a8af17ce62ab235447b7b7f700b516037dead886571ca084a5d825916"} Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.883105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerStarted","Data":"f92ce8bdcdcc126f2988b3dc136f1d65c4779134a6080a934d706808dee8d7a5"} Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.896189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerStarted","Data":"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261"} Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.896365 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api-log" containerID="cri-o://79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2" gracePeriod=30 Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.896604 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0" Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.896825 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api" containerID="cri-o://37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261" gracePeriod=30 Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.904976 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerStarted","Data":"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639"} Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.915336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" event={"ID":"1ae63b03-d161-4745-9912-afab23ec6f09","Type":"ContainerStarted","Data":"3e2595988077f516c051fa6033e3e6fe03bee4de665a43d4b8a9143013218cd8"} Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.915560 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.923573 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.923555328 podStartE2EDuration="3.923555328s" podCreationTimestamp="2025-12-12 04:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:59.911357257 +0000 UTC m=+1230.787374404" watchObservedRunningTime="2025-12-12 04:53:59.923555328 +0000 UTC m=+1230.799572475" Dec 12 04:53:59 crc kubenswrapper[4796]: I1212 04:53:59.949293 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" podStartSLOduration=3.949260731 podStartE2EDuration="3.949260731s" podCreationTimestamp="2025-12-12 04:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:53:59.945870005 +0000 UTC m=+1230.821887152" watchObservedRunningTime="2025-12-12 04:53:59.949260731 +0000 UTC m=+1230.825277878" Dec 12 04:54:00 crc kubenswrapper[4796]: I1212 04:54:00.924015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerStarted","Data":"12a13cd636a9c2e58197849fee84f6fd00f7b7a6b23ada01c889f7f9bf29ede0"} Dec 12 04:54:00 crc kubenswrapper[4796]: I1212 04:54:00.926061 4796 generic.go:334] "Generic (PLEG): container finished" podID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerID="79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2" exitCode=143 Dec 12 04:54:00 crc kubenswrapper[4796]: I1212 04:54:00.926132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerDied","Data":"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2"} Dec 12 04:54:00 crc kubenswrapper[4796]: I1212 04:54:00.928949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerStarted","Data":"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5"} Dec 12 04:54:00 crc kubenswrapper[4796]: I1212 04:54:00.954998 4796 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.016518731 podStartE2EDuration="4.954978971s" podCreationTimestamp="2025-12-12 04:53:56 +0000 UTC" firstStartedPulling="2025-12-12 04:53:57.518967384 +0000 UTC m=+1228.394984531" lastFinishedPulling="2025-12-12 04:53:58.457427624 +0000 UTC m=+1229.333444771" observedRunningTime="2025-12-12 04:54:00.945401181 +0000 UTC m=+1231.821418348" watchObservedRunningTime="2025-12-12 04:54:00.954978971 +0000 UTC m=+1231.830996118" Dec 12 04:54:01 crc kubenswrapper[4796]: I1212 04:54:01.468352 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 04:54:01 crc kubenswrapper[4796]: I1212 04:54:01.940210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerStarted","Data":"8e3b10dff5228c2140191388cc92477c908df8cbfbfa54dacc7a9ec9f9f49254"} Dec 12 04:54:02 crc kubenswrapper[4796]: I1212 04:54:02.970072 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:54:02 crc kubenswrapper[4796]: I1212 04:54:02.970422 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:54:03 crc kubenswrapper[4796]: I1212 04:54:03.958603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerStarted","Data":"354d3f52be45c9c8569c7fee4a9b7375b9480086224da440f53633e77ff0c62a"} Dec 12 04:54:03 crc kubenswrapper[4796]: I1212 04:54:03.958775 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:54:04 crc kubenswrapper[4796]: I1212 04:54:04.467191 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:54:04 crc kubenswrapper[4796]: I1212 04:54:04.468048 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f8ffdd64b-7gmkf" Dec 12 04:54:04 crc kubenswrapper[4796]: I1212 04:54:04.497002 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.60268953 podStartE2EDuration="7.496983006s" podCreationTimestamp="2025-12-12 04:53:57 +0000 UTC" firstStartedPulling="2025-12-12 04:53:58.883440716 +0000 UTC m=+1229.759457863" lastFinishedPulling="2025-12-12 04:54:02.777734192 +0000 UTC m=+1233.653751339" observedRunningTime="2025-12-12 04:54:03.996507612 +0000 UTC m=+1234.872524759" watchObservedRunningTime="2025-12-12 04:54:04.496983006 +0000 UTC m=+1235.373000163" Dec 12 04:54:05 crc kubenswrapper[4796]: I1212 04:54:05.332914 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-86bc7ff485-lzxvk" Dec 12 04:54:06 crc kubenswrapper[4796]: I1212 04:54:06.675470 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:54:06 crc 
kubenswrapper[4796]: I1212 04:54:06.751419 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:54:06 crc kubenswrapper[4796]: I1212 04:54:06.751632 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="dnsmasq-dns" containerID="cri-o://37293813af3e9850dbbe2da03c9f21305146e78b0576a89c6027723132c9cd8d" gracePeriod=10 Dec 12 04:54:06 crc kubenswrapper[4796]: I1212 04:54:06.765983 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 12 04:54:06 crc kubenswrapper[4796]: I1212 04:54:06.886098 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.004132 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerID="37293813af3e9850dbbe2da03c9f21305146e78b0576a89c6027723132c9cd8d" exitCode=0 Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.005059 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="cinder-scheduler" containerID="cri-o://c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639" gracePeriod=30 Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.004367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" event={"ID":"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa","Type":"ContainerDied","Data":"37293813af3e9850dbbe2da03c9f21305146e78b0576a89c6027723132c9cd8d"} Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.005720 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="probe" containerID="cri-o://3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5" gracePeriod=30 Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.389251 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.542602 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 12 04:54:07 crc kubenswrapper[4796]: E1212 04:54:07.543043 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="dnsmasq-dns" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.543060 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="dnsmasq-dns" Dec 12 04:54:07 crc kubenswrapper[4796]: E1212 04:54:07.543073 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="init" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.543079 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="init" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.543294 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" containerName="dnsmasq-dns" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.543899 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.548731 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.548916 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t26w5" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.549362 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.555071 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.559888 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560002 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560042 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560085 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck7fb\" (UniqueName: \"kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560140 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc\") pod \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\" (UID: \"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa\") " Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnsq6\" (UniqueName: \"kubernetes.io/projected/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-kube-api-access-mnsq6\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-combined-ca-bundle\") pod \"openstackclient\" 
(UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560504 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.560633 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config-secret\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.611941 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb" (OuterVolumeSpecName: "kube-api-access-ck7fb") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "kube-api-access-ck7fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.663065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.663312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.663458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config-secret\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.663996 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnsq6\" (UniqueName: \"kubernetes.io/projected/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-kube-api-access-mnsq6\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.664088 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck7fb\" (UniqueName: \"kubernetes.io/projected/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-kube-api-access-ck7fb\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.665821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.679177 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.681881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-openstack-config-secret\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.690744 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnsq6\" (UniqueName: \"kubernetes.io/projected/9826fd92-e55e-487f-ac6a-73a3e7f4d88a-kube-api-access-mnsq6\") pod \"openstackclient\" (UID: \"9826fd92-e55e-487f-ac6a-73a3e7f4d88a\") " pod="openstack/openstackclient" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.723052 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.733108 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.737885 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config" (OuterVolumeSpecName: "config") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.749696 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.761537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" (UID: "4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.765549 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.765720 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.765780 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.765853 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.765907 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:07 crc kubenswrapper[4796]: I1212 04:54:07.882971 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.017025 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.017025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-zcggl" event={"ID":"4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa","Type":"ContainerDied","Data":"dd3ee1a745c9ce81887c8d89ea509113b5aed481d1818a4bc6821f2a0eca5018"} Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.017144 4796 scope.go:117] "RemoveContainer" containerID="37293813af3e9850dbbe2da03c9f21305146e78b0576a89c6027723132c9cd8d" Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.028136 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerID="3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5" exitCode=0 Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.028351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerDied","Data":"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5"} Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.070474 4796 scope.go:117] "RemoveContainer" containerID="96cfa3c0d964b9de0e0cabcaf33ae20e3b4fd3d7b5b4c7319c83c2bd038203a6" Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.074665 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.092560 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-zcggl"] Dec 12 04:54:08 crc kubenswrapper[4796]: I1212 04:54:08.436024 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 12 04:54:09 crc kubenswrapper[4796]: I1212 04:54:09.037925 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstackclient" event={"ID":"9826fd92-e55e-487f-ac6a-73a3e7f4d88a","Type":"ContainerStarted","Data":"9c4b49284d6c84d19836e192722bd8ae8313254746d2090d63dcdba15c645c63"} Dec 12 04:54:09 crc kubenswrapper[4796]: I1212 04:54:09.423145 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa" path="/var/lib/kubelet/pods/4c7dfbe4-b481-4af9-8f9a-df84cb8a99fa/volumes" Dec 12 04:54:09 crc kubenswrapper[4796]: I1212 04:54:09.921447 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 12 04:54:10 crc kubenswrapper[4796]: E1212 04:54:10.007401 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9082485_1887_4b6d_8e1f_371825f61dfc.slice/crio-4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9\": RecentStats: unable to find data in memory cache]" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.072435 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.089887 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerID="c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639" exitCode=0 Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.089930 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerDied","Data":"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639"} Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.089955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4c9d5168-932a-4dcd-aca5-8a3669525d4f","Type":"ContainerDied","Data":"85f4021a29bf80f9e0e753834be0c698f6c1c5a8a74011dc9ffa5ce9e0a151ea"} Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.089971 4796 scope.go:117] "RemoveContainer" containerID="3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.090090 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.149526 4796 scope.go:117] "RemoveContainer" containerID="c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.177221 4796 scope.go:117] "RemoveContainer" containerID="3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5" Dec 12 04:54:12 crc kubenswrapper[4796]: E1212 04:54:12.177915 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5\": container with ID starting with 3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5 not found: ID does not exist" containerID="3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.177973 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5"} err="failed to get container status \"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5\": rpc error: code = NotFound desc = could not find container \"3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5\": container with ID starting with 3e85cab9f1810c2f083feb3ed42e0b9b05a743c13d23cd22d2673c4dc0993ae5 not found: ID does not exist" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.178065 4796 scope.go:117] "RemoveContainer" containerID="c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639" Dec 12 04:54:12 crc kubenswrapper[4796]: E1212 04:54:12.178509 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639\": container with ID starting with c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639 not found: ID does not exist" containerID="c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.178558 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639"} err="failed to get container status \"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639\": rpc error: code = NotFound desc = could not find container \"c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639\": container with ID starting with c2c00ab917f23e1b4759e777565225ec3a5fb0b55f63fc5ee1ac04897f438639 not found: ID does not exist" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205199 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zb4dk\" (UniqueName: \"kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205449 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.205553 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts\") pod \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\" (UID: \"4c9d5168-932a-4dcd-aca5-8a3669525d4f\") " Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.206750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.214509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts" (OuterVolumeSpecName: "scripts") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.218470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk" (OuterVolumeSpecName: "kube-api-access-zb4dk") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "kube-api-access-zb4dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.222462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.285891 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.307833 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d5168-932a-4dcd-aca5-8a3669525d4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.307863 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb4dk\" (UniqueName: \"kubernetes.io/projected/4c9d5168-932a-4dcd-aca5-8a3669525d4f-kube-api-access-zb4dk\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.307876 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.307883 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.307892 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.346384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data" (OuterVolumeSpecName: "config-data") pod "4c9d5168-932a-4dcd-aca5-8a3669525d4f" (UID: "4c9d5168-932a-4dcd-aca5-8a3669525d4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.409112 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d5168-932a-4dcd-aca5-8a3669525d4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.431038 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.442441 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.482100 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:12 crc kubenswrapper[4796]: E1212 04:54:12.482530 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="cinder-scheduler" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.482548 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="cinder-scheduler" Dec 12 04:54:12 crc kubenswrapper[4796]: E1212 04:54:12.482579 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="probe" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.482587 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="probe" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.482775 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="probe" Dec 12 04:54:12 crc kubenswrapper[4796]: 
I1212 04:54:12.482797 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" containerName="cinder-scheduler" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.483776 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.490833 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.492236 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615204 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl4jq\" (UniqueName: \"kubernetes.io/projected/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-kube-api-access-xl4jq\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615292 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.615310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.716965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl4jq\" (UniqueName: \"kubernetes.io/projected/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-kube-api-access-xl4jq\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.717400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.717423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.717468 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.717563 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.717584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.723879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.724789 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.725541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.727365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.728051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.744130 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xl4jq\" (UniqueName: \"kubernetes.io/projected/ec3a4988-59e7-443a-bbf1-31cd16abdcd6-kube-api-access-xl4jq\") pod \"cinder-scheduler-0\" (UID: \"ec3a4988-59e7-443a-bbf1-31cd16abdcd6\") " pod="openstack/cinder-scheduler-0" Dec 12 04:54:12 crc kubenswrapper[4796]: I1212 04:54:12.807547 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 12 04:54:13 crc kubenswrapper[4796]: I1212 04:54:13.119569 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-68c4f9dc76-n9c9p" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.151:9696/\": dial tcp 10.217.0.151:9696: connect: connection refused" Dec 12 04:54:13 crc kubenswrapper[4796]: I1212 04:54:13.248873 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 12 04:54:13 crc kubenswrapper[4796]: W1212 04:54:13.266600 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec3a4988_59e7_443a_bbf1_31cd16abdcd6.slice/crio-fa79ad37d5f07828314676ca8eee4ccdd201363d43460252e758d1012210340a WatchSource:0}: Error finding container fa79ad37d5f07828314676ca8eee4ccdd201363d43460252e758d1012210340a: Status 404 returned error can't find the container with id fa79ad37d5f07828314676ca8eee4ccdd201363d43460252e758d1012210340a Dec 12 04:54:13 crc kubenswrapper[4796]: I1212 04:54:13.421006 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9d5168-932a-4dcd-aca5-8a3669525d4f" path="/var/lib/kubelet/pods/4c9d5168-932a-4dcd-aca5-8a3669525d4f/volumes" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.122892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a4988-59e7-443a-bbf1-31cd16abdcd6","Type":"ContainerStarted","Data":"fa79ad37d5f07828314676ca8eee4ccdd201363d43460252e758d1012210340a"} Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.442588 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.442843 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-central-agent" containerID="cri-o://73ac616a8af17ce62ab235447b7b7f700b516037dead886571ca084a5d825916" gracePeriod=30 Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.442906 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="sg-core" containerID="cri-o://8e3b10dff5228c2140191388cc92477c908df8cbfbfa54dacc7a9ec9f9f49254" gracePeriod=30 Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.442962 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-notification-agent" containerID="cri-o://12a13cd636a9c2e58197849fee84f6fd00f7b7a6b23ada01c889f7f9bf29ede0" gracePeriod=30 Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.443141 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="proxy-httpd" 
containerID="cri-o://354d3f52be45c9c8569c7fee4a9b7375b9480086224da440f53633e77ff0c62a" gracePeriod=30 Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.478312 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.780480 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58987c9f79-c2xlb"] Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.782189 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.790880 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.792524 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.792893 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.819047 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58987c9f79-c2xlb"] Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.868907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-run-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.871467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-internal-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.871622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-public-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.871775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-etc-swift\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.871898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-config-data\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.872015 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6mc7\" (UniqueName: \"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-kube-api-access-w6mc7\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.873513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-log-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:14 crc kubenswrapper[4796]: I1212 04:54:14.873763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-combined-ca-bundle\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.034974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-combined-ca-bundle\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-run-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-internal-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-public-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-etc-swift\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035897 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-config-data\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035914 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w6mc7\" (UniqueName: \"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-kube-api-access-w6mc7\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.035950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-log-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.036496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-log-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.036771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-run-httpd\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.044144 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-config-data\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.044942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-internal-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.045095 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-combined-ca-bundle\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.045655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-public-tls-certs\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.052596 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-etc-swift\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.065004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6mc7\" (UniqueName: 
\"kubernetes.io/projected/80ea0a4a-0715-4d5b-be0c-e11f00e6d743-kube-api-access-w6mc7\") pod \"swift-proxy-58987c9f79-c2xlb\" (UID: \"80ea0a4a-0715-4d5b-be0c-e11f00e6d743\") " pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.142499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a4988-59e7-443a-bbf1-31cd16abdcd6","Type":"ContainerStarted","Data":"ae84ba383ac37c4630c90742c67df461dce53afb5bbe41f4061f9f7c5f7b4b24"} Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.146001 4796 generic.go:334] "Generic (PLEG): container finished" podID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerID="8e3b10dff5228c2140191388cc92477c908df8cbfbfa54dacc7a9ec9f9f49254" exitCode=2 Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.146030 4796 generic.go:334] "Generic (PLEG): container finished" podID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerID="73ac616a8af17ce62ab235447b7b7f700b516037dead886571ca084a5d825916" exitCode=0 Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.146048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerDied","Data":"8e3b10dff5228c2140191388cc92477c908df8cbfbfa54dacc7a9ec9f9f49254"} Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.146069 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerDied","Data":"73ac616a8af17ce62ab235447b7b7f700b516037dead886571ca084a5d825916"} Dec 12 04:54:15 crc kubenswrapper[4796]: I1212 04:54:15.212552 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:16 crc kubenswrapper[4796]: I1212 04:54:16.162442 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68c4f9dc76-n9c9p_6c750273-c3b9-46b0-b884-422d779e73e3/neutron-api/0.log" Dec 12 04:54:16 crc kubenswrapper[4796]: I1212 04:54:16.162485 4796 generic.go:334] "Generic (PLEG): container finished" podID="6c750273-c3b9-46b0-b884-422d779e73e3" containerID="fb9efb26da5936ab195fd51951ea07162000999ef7bcec6abb416a860a74b1fa" exitCode=137 Dec 12 04:54:16 crc kubenswrapper[4796]: I1212 04:54:16.162557 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerDied","Data":"fb9efb26da5936ab195fd51951ea07162000999ef7bcec6abb416a860a74b1fa"} Dec 12 04:54:16 crc kubenswrapper[4796]: I1212 04:54:16.168376 4796 generic.go:334] "Generic (PLEG): container finished" podID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerID="354d3f52be45c9c8569c7fee4a9b7375b9480086224da440f53633e77ff0c62a" exitCode=0 Dec 12 04:54:16 crc kubenswrapper[4796]: I1212 04:54:16.168410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerDied","Data":"354d3f52be45c9c8569c7fee4a9b7375b9480086224da440f53633e77ff0c62a"} Dec 12 04:54:18 crc kubenswrapper[4796]: I1212 04:54:18.212833 4796 generic.go:334] "Generic (PLEG): container finished" podID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerID="12a13cd636a9c2e58197849fee84f6fd00f7b7a6b23ada01c889f7f9bf29ede0" exitCode=0 Dec 12 04:54:18 crc kubenswrapper[4796]: I1212 04:54:18.212878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerDied","Data":"12a13cd636a9c2e58197849fee84f6fd00f7b7a6b23ada01c889f7f9bf29ede0"} Dec 12 04:54:20 crc kubenswrapper[4796]: E1212 04:54:20.251207 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9082485_1887_4b6d_8e1f_371825f61dfc.slice/crio-4628a125e6a4f33d1f4c4ea98f81089bae1785f53c87fa8f49905ffa4fb422d9\": RecentStats: unable to find data in memory cache]" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.037779 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68c4f9dc76-n9c9p_6c750273-c3b9-46b0-b884-422d779e73e3/neutron-api/0.log" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.038263 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.160216 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config\") pod \"6c750273-c3b9-46b0-b884-422d779e73e3\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.160309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs\") pod \"6c750273-c3b9-46b0-b884-422d779e73e3\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.160337 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config\") pod \"6c750273-c3b9-46b0-b884-422d779e73e3\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.160381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle\") pod \"6c750273-c3b9-46b0-b884-422d779e73e3\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.160432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqngl\" (UniqueName: \"kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl\") pod \"6c750273-c3b9-46b0-b884-422d779e73e3\" (UID: \"6c750273-c3b9-46b0-b884-422d779e73e3\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.171060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl" (OuterVolumeSpecName: "kube-api-access-cqngl") pod "6c750273-c3b9-46b0-b884-422d779e73e3" (UID: "6c750273-c3b9-46b0-b884-422d779e73e3"). InnerVolumeSpecName "kube-api-access-cqngl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.186538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6c750273-c3b9-46b0-b884-422d779e73e3" (UID: "6c750273-c3b9-46b0-b884-422d779e73e3"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.244651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c750273-c3b9-46b0-b884-422d779e73e3" (UID: "6c750273-c3b9-46b0-b884-422d779e73e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.263738 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.264099 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.264227 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqngl\" (UniqueName: \"kubernetes.io/projected/6c750273-c3b9-46b0-b884-422d779e73e3-kube-api-access-cqngl\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.287874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config" (OuterVolumeSpecName: "config") pod "6c750273-c3b9-46b0-b884-422d779e73e3" (UID: "6c750273-c3b9-46b0-b884-422d779e73e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.292463 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9826fd92-e55e-487f-ac6a-73a3e7f4d88a","Type":"ContainerStarted","Data":"e6b34b89d9f3386e105eaa008b2fdf727538243c5b560c5724c4b22ce722f0c2"} Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.298622 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.319467 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68c4f9dc76-n9c9p_6c750273-c3b9-46b0-b884-422d779e73e3/neutron-api/0.log" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.319668 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68c4f9dc76-n9c9p" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.320125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68c4f9dc76-n9c9p" event={"ID":"6c750273-c3b9-46b0-b884-422d779e73e3","Type":"ContainerDied","Data":"b0a02947986385cc6fa9a7507fe8e02a9d9d3897423881b4236f1cec7a4d13c5"} Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.320179 4796 scope.go:117] "RemoveContainer" containerID="5d5de606e92abc539d7bff830a649d030f37d161fe54b0398b062ad7dc9bea17" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.323039 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.063394526 podStartE2EDuration="15.323027901s" podCreationTimestamp="2025-12-12 04:54:07 +0000 UTC" firstStartedPulling="2025-12-12 04:54:08.425217696 +0000 UTC m=+1239.301234843" lastFinishedPulling="2025-12-12 04:54:21.684851071 +0000 UTC m=+1252.560868218" observedRunningTime="2025-12-12 04:54:22.315416443 +0000 UTC m=+1253.191433590" watchObservedRunningTime="2025-12-12 04:54:22.323027901 +0000 UTC m=+1253.199045048" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.340931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859caf6a-afe4-4ac1-b43e-f80ca4276b95","Type":"ContainerDied","Data":"f92ce8bdcdcc126f2988b3dc136f1d65c4779134a6080a934d706808dee8d7a5"} Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.341189 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367370 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367402 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367487 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367536 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.367665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m58hb\" (UniqueName: \"kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb\") pod \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\" (UID: \"859caf6a-afe4-4ac1-b43e-f80ca4276b95\") " Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.368098 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.369705 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.371187 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.379315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb" (OuterVolumeSpecName: "kube-api-access-m58hb") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "kube-api-access-m58hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.381962 4796 scope.go:117] "RemoveContainer" containerID="fb9efb26da5936ab195fd51951ea07162000999ef7bcec6abb416a860a74b1fa" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.387987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts" (OuterVolumeSpecName: "scripts") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.402813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6c750273-c3b9-46b0-b884-422d779e73e3" (UID: "6c750273-c3b9-46b0-b884-422d779e73e3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.426762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.428792 4796 scope.go:117] "RemoveContainer" containerID="354d3f52be45c9c8569c7fee4a9b7375b9480086224da440f53633e77ff0c62a" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.452315 4796 scope.go:117] "RemoveContainer" containerID="8e3b10dff5228c2140191388cc92477c908df8cbfbfa54dacc7a9ec9f9f49254" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470773 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470798 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470807 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470815 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859caf6a-afe4-4ac1-b43e-f80ca4276b95-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470823 4796 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c750273-c3b9-46b0-b884-422d779e73e3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.470833 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m58hb\" (UniqueName: \"kubernetes.io/projected/859caf6a-afe4-4ac1-b43e-f80ca4276b95-kube-api-access-m58hb\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.479936 4796 scope.go:117] "RemoveContainer" containerID="12a13cd636a9c2e58197849fee84f6fd00f7b7a6b23ada01c889f7f9bf29ede0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.521704 4796 scope.go:117] "RemoveContainer" containerID="73ac616a8af17ce62ab235447b7b7f700b516037dead886571ca084a5d825916" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.538419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.545315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data" (OuterVolumeSpecName: "config-data") pod "859caf6a-afe4-4ac1-b43e-f80ca4276b95" (UID: "859caf6a-afe4-4ac1-b43e-f80ca4276b95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.572773 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.572820 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859caf6a-afe4-4ac1-b43e-f80ca4276b95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.672041 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58987c9f79-c2xlb"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.727187 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.763180 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68c4f9dc76-n9c9p"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.866075 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.873474 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.893700 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:22 crc kubenswrapper[4796]: E1212 04:54:22.894179 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-api" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894198 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-api" Dec 12 04:54:22 crc kubenswrapper[4796]: E1212 04:54:22.894207 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-central-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894214 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-central-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: E1212 04:54:22.894228 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-notification-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894235 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-notification-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: E1212 04:54:22.894244 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="sg-core" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894250 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="sg-core" Dec 12 04:54:22 crc kubenswrapper[4796]: E1212 04:54:22.894267 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894272 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: 
E1212 04:54:22.894314 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="proxy-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894321 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="proxy-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894498 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="proxy-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894514 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-api" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894523 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="sg-core" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894539 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" containerName="neutron-httpd" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894547 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-notification-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.894561 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" containerName="ceilometer-central-agent" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.896537 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.899298 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.903252 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.930346 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978389 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978485 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978525 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978623 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:22 crc kubenswrapper[4796]: I1212 04:54:22.978681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8x2q\" (UniqueName: \"kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080088 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080134 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080178 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8x2q\" (UniqueName: \"kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080292 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.080611 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.081373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.094160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.096806 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8x2q\" (UniqueName: \"kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.097520 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.108398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.112524 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.244431 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.383659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58987c9f79-c2xlb" event={"ID":"80ea0a4a-0715-4d5b-be0c-e11f00e6d743","Type":"ContainerStarted","Data":"772c84d42a714dd3a72ac7284441b9b9b8f379cb4c4ce938664beff56993fc11"} Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.383965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58987c9f79-c2xlb" event={"ID":"80ea0a4a-0715-4d5b-be0c-e11f00e6d743","Type":"ContainerStarted","Data":"d4ac841709a23ac2aab580da95daff0d4ba4f3d1ea949d62903e2d94739e4bb0"} Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.383976 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58987c9f79-c2xlb" event={"ID":"80ea0a4a-0715-4d5b-be0c-e11f00e6d743","Type":"ContainerStarted","Data":"a8e01e268ac4aa7ed4f24f2a590508fd8a98b0dd31a0cbc72011672f2fe885e2"} Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.384349 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.384397 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.391812 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a4988-59e7-443a-bbf1-31cd16abdcd6","Type":"ContainerStarted","Data":"feea146de5a83c1261e621a0aac08ad406be8732d6c47e003896bd8801e024ec"} Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.435947 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58987c9f79-c2xlb" podStartSLOduration=9.435924441 podStartE2EDuration="9.435924441s" podCreationTimestamp="2025-12-12 04:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:23.416248926 +0000 UTC m=+1254.292266073" watchObservedRunningTime="2025-12-12 04:54:23.435924441 +0000 UTC m=+1254.311941588" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.449380 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.449364631 podStartE2EDuration="11.449364631s" podCreationTimestamp="2025-12-12 04:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:23.438992917 +0000 UTC m=+1254.315010074" watchObservedRunningTime="2025-12-12 04:54:23.449364631 +0000 UTC m=+1254.325381778" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.490222 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c750273-c3b9-46b0-b884-422d779e73e3" path="/var/lib/kubelet/pods/6c750273-c3b9-46b0-b884-422d779e73e3/volumes" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.514979 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859caf6a-afe4-4ac1-b43e-f80ca4276b95" path="/var/lib/kubelet/pods/859caf6a-afe4-4ac1-b43e-f80ca4276b95/volumes" Dec 12 04:54:23 crc kubenswrapper[4796]: I1212 04:54:23.657493 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:24 crc kubenswrapper[4796]: I1212 04:54:24.057285 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 12 04:54:24 crc kubenswrapper[4796]: I1212 04:54:24.258780 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:24 crc kubenswrapper[4796]: I1212 04:54:24.259990 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-log" containerID="cri-o://ef32e79269831a71285933d9c85cdf8359cd004801eabddbb14c41797e90f0be" gracePeriod=30 Dec 12 04:54:24 crc kubenswrapper[4796]: I1212 04:54:24.260565 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-httpd" containerID="cri-o://741fe1ca77a568f4afe310310736db014c37377d494eed0ccb414e01fe71b5f8" gracePeriod=30 Dec 12 04:54:24 crc kubenswrapper[4796]: I1212 04:54:24.427196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerStarted","Data":"4a06084120cf8bf8b4d123b4660da9e026746cf8211de693f22e7a896c4191ff"} Dec 12 04:54:25 crc kubenswrapper[4796]: I1212 04:54:25.438889 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerStarted","Data":"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd"} Dec 12 04:54:25 crc kubenswrapper[4796]: I1212 04:54:25.441287 4796 generic.go:334] "Generic (PLEG): container finished" podID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerID="ef32e79269831a71285933d9c85cdf8359cd004801eabddbb14c41797e90f0be" exitCode=143 Dec 12 04:54:25 crc kubenswrapper[4796]: I1212 04:54:25.441352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerDied","Data":"ef32e79269831a71285933d9c85cdf8359cd004801eabddbb14c41797e90f0be"} Dec 12 04:54:26 crc kubenswrapper[4796]: I1212 04:54:26.369463 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:26 crc kubenswrapper[4796]: I1212 04:54:26.370003 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-log" containerID="cri-o://95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6" gracePeriod=30 Dec 12 04:54:26 crc kubenswrapper[4796]: I1212 04:54:26.370136 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-httpd" containerID="cri-o://3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380" gracePeriod=30 Dec 12 04:54:26 crc kubenswrapper[4796]: I1212 04:54:26.470541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerStarted","Data":"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.484328 4796 generic.go:334] "Generic (PLEG): container finished" podID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerID="70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0" exitCode=137 Dec 12 04:54:27 crc 
kubenswrapper[4796]: I1212 04:54:27.485035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.485068 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.487680 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerStarted","Data":"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.490726 4796 generic.go:334] "Generic (PLEG): container finished" podID="7913672c-384c-472c-89a8-0d546f345a28" containerID="7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928" exitCode=137 Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.490776 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerDied","Data":"7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.490794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.494040 4796 generic.go:334] "Generic (PLEG): container finished" podID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerID="95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6" exitCode=143 Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.494091 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerDied","Data":"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6"} Dec 12 04:54:27 crc kubenswrapper[4796]: I1212 04:54:27.808535 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.194368 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.508388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerStarted","Data":"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69"} Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.508464 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.508494 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-central-agent" containerID="cri-o://fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd" gracePeriod=30 Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 
04:54:28.508585 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="proxy-httpd" containerID="cri-o://eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69" gracePeriod=30 Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.508618 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="sg-core" containerID="cri-o://57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32" gracePeriod=30 Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.508667 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-notification-agent" containerID="cri-o://da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22" gracePeriod=30 Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.521018 4796 generic.go:334] "Generic (PLEG): container finished" podID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerID="741fe1ca77a568f4afe310310736db014c37377d494eed0ccb414e01fe71b5f8" exitCode=0 Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.521350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerDied","Data":"741fe1ca77a568f4afe310310736db014c37377d494eed0ccb414e01fe71b5f8"} Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.521397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d70c6209-42c5-47d3-9d1a-156d5c7a6317","Type":"ContainerDied","Data":"b8956b6c61d94b1e984282324d1c6ccf2527a2b3883936db18bd0a9bc4860cc4"} Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.521410 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8956b6c61d94b1e984282324d1c6ccf2527a2b3883936db18bd0a9bc4860cc4" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.536975 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.573369669 podStartE2EDuration="6.536958973s" podCreationTimestamp="2025-12-12 04:54:22 +0000 UTC" firstStartedPulling="2025-12-12 04:54:23.668211653 +0000 UTC m=+1254.544228800" lastFinishedPulling="2025-12-12 04:54:27.631800957 +0000 UTC m=+1258.507818104" observedRunningTime="2025-12-12 04:54:28.53334945 +0000 UTC m=+1259.409366597" watchObservedRunningTime="2025-12-12 04:54:28.536958973 +0000 UTC m=+1259.412976120" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.549214 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565541 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c58v9\" (UniqueName: \"kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565589 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565611 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565681 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.565723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run\") pod \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\" (UID: \"d70c6209-42c5-47d3-9d1a-156d5c7a6317\") " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.572043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.572762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs" (OuterVolumeSpecName: "logs") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.574474 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.585638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts" (OuterVolumeSpecName: "scripts") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.594961 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9" (OuterVolumeSpecName: "kube-api-access-c58v9") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "kube-api-access-c58v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.640488 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669562 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669608 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70c6209-42c5-47d3-9d1a-156d5c7a6317-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669621 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c58v9\" (UniqueName: \"kubernetes.io/projected/d70c6209-42c5-47d3-9d1a-156d5c7a6317-kube-api-access-c58v9\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669649 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669663 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.669676 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.701385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data" (OuterVolumeSpecName: "config-data") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.714949 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d70c6209-42c5-47d3-9d1a-156d5c7a6317" (UID: "d70c6209-42c5-47d3-9d1a-156d5c7a6317"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.764084 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.771115 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.771153 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:28 crc kubenswrapper[4796]: I1212 04:54:28.771165 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70c6209-42c5-47d3-9d1a-156d5c7a6317-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.535164 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerID="eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69" exitCode=0 Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.535197 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerID="57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32" exitCode=2 Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.535209 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerID="da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22" exitCode=0 Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.535268 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.536082 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerDied","Data":"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69"} Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.536110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerDied","Data":"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32"} Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.536120 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerDied","Data":"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22"} Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.571836 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.584258 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.591837 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:29 crc kubenswrapper[4796]: E1212 04:54:29.592196 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-log" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.592211 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-log" Dec 12 04:54:29 crc kubenswrapper[4796]: E1212 04:54:29.592242 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-httpd" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.592250 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-httpd" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.592423 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-httpd" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.592454 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" containerName="glance-log" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.593334 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.596398 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.596659 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.603518 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793409 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793559 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2std\" (UniqueName: \"kubernetes.io/projected/b88340e6-0adf-40e5-9e91-610c949cd71b-kube-api-access-v2std\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.793662 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895519 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2std\" (UniqueName: \"kubernetes.io/projected/b88340e6-0adf-40e5-9e91-610c949cd71b-kube-api-access-v2std\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895553 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895605 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.895638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.897019 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.897118 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.897208 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b88340e6-0adf-40e5-9e91-610c949cd71b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.906312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.906452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.910027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.931442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88340e6-0adf-40e5-9e91-610c949cd71b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.941661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:29 crc kubenswrapper[4796]: I1212 04:54:29.948754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2std\" (UniqueName: \"kubernetes.io/projected/b88340e6-0adf-40e5-9e91-610c949cd71b-kube-api-access-v2std\") pod \"glance-default-internal-api-0\" (UID: \"b88340e6-0adf-40e5-9e91-610c949cd71b\") " pod="openstack/glance-default-internal-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.182618 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.216602 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.227117 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.236986 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58987c9f79-c2xlb" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302423 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302576 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302630 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302657 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302715 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.302776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4wn\" (UniqueName: \"kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn\") pod \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\" (UID: \"a93c9e56-c4a9-41c8-a519-4193af0d7cb8\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.308428 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.308672 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.309389 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs" (OuterVolumeSpecName: "logs") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.319339 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts" (OuterVolumeSpecName: "scripts") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.319797 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn" (OuterVolumeSpecName: "kube-api-access-6t4wn") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "kube-api-access-6t4wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.335412 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.387946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data" (OuterVolumeSpecName: "config-data") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404804 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404832 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404842 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404850 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404858 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404867 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t4wn\" (UniqueName: \"kubernetes.io/projected/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-kube-api-access-6t4wn\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.404876 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.422402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a93c9e56-c4a9-41c8-a519-4193af0d7cb8" (UID: "a93c9e56-c4a9-41c8-a519-4193af0d7cb8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.442844 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.510396 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.510433 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a93c9e56-c4a9-41c8-a519-4193af0d7cb8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.558474 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.602692 4796 generic.go:334] "Generic (PLEG): container finished" podID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerID="3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380" exitCode=0 Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.602749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerDied","Data":"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380"} Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.602775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a93c9e56-c4a9-41c8-a519-4193af0d7cb8","Type":"ContainerDied","Data":"3a3c085190d5221a21a93e0f143f2ff0f9108e1c118d121a7bdd659d146ad8cd"} Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.602793 4796 scope.go:117] "RemoveContainer" containerID="3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.603222 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.651676 4796 generic.go:334] "Generic (PLEG): container finished" podID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerID="37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261" exitCode=137 Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.652168 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.652693 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerDied","Data":"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261"} Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.652729 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e13c7a61-e620-4151-99ca-a552eff1e8d7","Type":"ContainerDied","Data":"9e591b2266eeb087ea5b42550dc61703148c6369cedc4eb13fe4a465bc46825a"} Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.673747 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.689739 4796 scope.go:117] "RemoveContainer" containerID="95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.707520 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724604 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czb2s\" (UniqueName: \"kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724691 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: 
\"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724813 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724835 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724854 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.724884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs\") pod \"e13c7a61-e620-4151-99ca-a552eff1e8d7\" (UID: \"e13c7a61-e620-4151-99ca-a552eff1e8d7\") " Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.731006 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.731975 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.731994 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api" Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.732032 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-httpd" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732040 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-httpd" Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.732050 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-log" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732058 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-log" Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.732112 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api-log" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732124 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api-log" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 
04:54:30.732426 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732469 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-log" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732496 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" containerName="cinder-api-log" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.732505 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" containerName="glance-httpd" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.736140 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.738376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.740234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs" (OuterVolumeSpecName: "logs") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.751629 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.758156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s" (OuterVolumeSpecName: "kube-api-access-czb2s") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "kube-api-access-czb2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.758239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts" (OuterVolumeSpecName: "scripts") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.758621 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.758775 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.758926 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.835714 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.837534 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42j26\" (UniqueName: \"kubernetes.io/projected/4cd46cb5-ba6c-480f-a039-95a66caa648a-kube-api-access-42j26\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.837588 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.837608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.837817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.837930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838030 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-logs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838179 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czb2s\" (UniqueName: 
\"kubernetes.io/projected/e13c7a61-e620-4151-99ca-a552eff1e8d7-kube-api-access-czb2s\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838198 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e13c7a61-e620-4151-99ca-a552eff1e8d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838212 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838225 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.838236 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13c7a61-e620-4151-99ca-a552eff1e8d7-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.849177 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.878651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data" (OuterVolumeSpecName: "config-data") pod "e13c7a61-e620-4151-99ca-a552eff1e8d7" (UID: "e13c7a61-e620-4151-99ca-a552eff1e8d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.933183 4796 scope.go:117] "RemoveContainer" containerID="3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380" Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.937465 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380\": container with ID starting with 3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380 not found: ID does not exist" containerID="3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.937527 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380"} err="failed to get container status \"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380\": rpc error: code = NotFound desc = could not find container \"3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380\": container with ID starting with 3996a07a6bdea95b9d731a353c796b3766d64bce8f121a3d8c6a05f0d81e8380 not found: ID does not exist" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.937574 4796 scope.go:117] "RemoveContainer" containerID="95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6" Dec 12 04:54:30 crc kubenswrapper[4796]: E1212 04:54:30.937868 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6\": container with ID starting with 95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6 not found: ID does not exist" containerID="95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.937898 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6"} err="failed to get container status \"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6\": rpc error: code = NotFound desc = could not find container \"95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6\": container with ID starting with 95bfe91e1b2913a2cf5092d9f4a31c7ff420f50e614c4c2e3ec05fdb481a1df6 not found: ID does not exist" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.937919 4796 scope.go:117] "RemoveContainer" containerID="37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939745 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-logs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42j26\" (UniqueName: \"kubernetes.io/projected/4cd46cb5-ba6c-480f-a039-95a66caa648a-kube-api-access-42j26\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939919 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.939964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.940150 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.940204 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13c7a61-e620-4151-99ca-a552eff1e8d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.940868 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.941140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 
crc kubenswrapper[4796]: I1212 04:54:30.941237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cd46cb5-ba6c-480f-a039-95a66caa648a-logs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.944059 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.945728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.946421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.954213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cd46cb5-ba6c-480f-a039-95a66caa648a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.963835 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 12 04:54:30 crc kubenswrapper[4796]: W1212 04:54:30.976541 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb88340e6_0adf_40e5_9e91_610c949cd71b.slice/crio-9addd1f8194210fae6f5887d740c978fff2f8cf1186e5cb8eb46efee3e8c97ab WatchSource:0}: Error finding container 9addd1f8194210fae6f5887d740c978fff2f8cf1186e5cb8eb46efee3e8c97ab: Status 404 returned error can't find the container with id 9addd1f8194210fae6f5887d740c978fff2f8cf1186e5cb8eb46efee3e8c97ab Dec 12 04:54:30 crc kubenswrapper[4796]: I1212 04:54:30.976711 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42j26\" (UniqueName: \"kubernetes.io/projected/4cd46cb5-ba6c-480f-a039-95a66caa648a-kube-api-access-42j26\") pod \"glance-default-external-api-0\" (UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.018182 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.037939 4796 scope.go:117] "RemoveContainer" containerID="79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.044121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" 
(UID: \"4cd46cb5-ba6c-480f-a039-95a66caa648a\") " pod="openstack/glance-default-external-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.044563 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.182663 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.193606 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.200096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.201099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.205010 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.225075 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.229811 4796 scope.go:117] "RemoveContainer" containerID="37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261" Dec 12 04:54:31 crc kubenswrapper[4796]: E1212 04:54:31.238199 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261\": container with ID starting with 37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261 not found: ID does not exist" containerID="37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.248471 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261"} err="failed to get container status \"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261\": rpc error: code = NotFound desc = could not find container \"37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261\": container with ID starting with 37d6a3577e2a83502631ad043da7c903031f42f269da4d9be41fa9f3d0766261 not found: ID does not exist" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.248512 4796 scope.go:117] "RemoveContainer" containerID="79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.245253 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.248804 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.248851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.248877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.249052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfzk\" (UniqueName: \"kubernetes.io/projected/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-kube-api-access-9zfzk\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.249079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-scripts\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.249141 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.249178 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-logs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.249197 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: E1212 04:54:31.254010 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2\": container with ID starting with 79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2 not found: ID does not exist" containerID="79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.254128 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2"} err="failed to get container status \"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2\": rpc error: code = NotFound desc = could not find container \"79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2\": container with ID starting with 79ae34a957a9e0987c4568ce68831db29b13f77d84d7e2dfcbae15c302c350b2 not found: ID does not exist" Dec 12 04:54:31 
crc kubenswrapper[4796]: I1212 04:54:31.299605 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352007 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfzk\" (UniqueName: \"kubernetes.io/projected/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-kube-api-access-9zfzk\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-scripts\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-logs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352414 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352437 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.352453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.355865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-logs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.358777 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.361987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-scripts\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.368066 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.368781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.369375 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.369703 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.372232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.378666 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfzk\" (UniqueName: \"kubernetes.io/projected/b2d0ca4f-8c51-492b-ae06-3d09ecdc4934-kube-api-access-9zfzk\") pod \"cinder-api-0\" (UID: \"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934\") " pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.431315 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93c9e56-c4a9-41c8-a519-4193af0d7cb8" path="/var/lib/kubelet/pods/a93c9e56-c4a9-41c8-a519-4193af0d7cb8/volumes" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.432211 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70c6209-42c5-47d3-9d1a-156d5c7a6317" path="/var/lib/kubelet/pods/d70c6209-42c5-47d3-9d1a-156d5c7a6317/volumes" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.433078 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13c7a61-e620-4151-99ca-a552eff1e8d7" path="/var/lib/kubelet/pods/e13c7a61-e620-4151-99ca-a552eff1e8d7/volumes" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.532601 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 12 04:54:31 crc kubenswrapper[4796]: I1212 04:54:31.674006 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b88340e6-0adf-40e5-9e91-610c949cd71b","Type":"ContainerStarted","Data":"9addd1f8194210fae6f5887d740c978fff2f8cf1186e5cb8eb46efee3e8c97ab"} Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.044967 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.167455 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 12 04:54:32 crc kubenswrapper[4796]: W1212 04:54:32.197506 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d0ca4f_8c51_492b_ae06_3d09ecdc4934.slice/crio-8f56ce12f4549b29f4bb5e6cab2985381c540b3f3dc0dc89eab47173267e7989 WatchSource:0}: Error finding container 8f56ce12f4549b29f4bb5e6cab2985381c540b3f3dc0dc89eab47173267e7989: Status 404 returned error can't find the container with id 8f56ce12f4549b29f4bb5e6cab2985381c540b3f3dc0dc89eab47173267e7989 Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.745914 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934","Type":"ContainerStarted","Data":"8f56ce12f4549b29f4bb5e6cab2985381c540b3f3dc0dc89eab47173267e7989"} Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.759549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b88340e6-0adf-40e5-9e91-610c949cd71b","Type":"ContainerStarted","Data":"115027233ac9f55f1f513e2f80ee9385dfb33164a4ecaa439dd95f4f827942af"} Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.775832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cd46cb5-ba6c-480f-a039-95a66caa648a","Type":"ContainerStarted","Data":"3dffce240484f350030e2f2281a16535493c5e3a9e1fcbf41bef7f41826091fb"} Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.976207 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.976262 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.976335 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.977037 4796 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:54:32 crc kubenswrapper[4796]: I1212 04:54:32.977092 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5" gracePeriod=600 Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.792221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b88340e6-0adf-40e5-9e91-610c949cd71b","Type":"ContainerStarted","Data":"7ab59892ce4348cfcf3d07b165e9e569f0cd63c1fd378d7de48c015b1db1c9f5"} Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.799077 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cd46cb5-ba6c-480f-a039-95a66caa648a","Type":"ContainerStarted","Data":"18b74d5086631ce3f836a362fa264298ea7c5d0c3590de61933543278d9320e6"} Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.805645 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5" exitCode=0 Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.805953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5"} Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.806008 4796 scope.go:117] "RemoveContainer" containerID="1733a30215adfd71b24cb88a4cee9d965e3cb0a10cc8f3339202f4fa5f80086c" Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.831936 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.831918707 podStartE2EDuration="4.831918707s" podCreationTimestamp="2025-12-12 04:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:33.826848748 +0000 UTC m=+1264.702865895" watchObservedRunningTime="2025-12-12 04:54:33.831918707 +0000 UTC m=+1264.707935854" Dec 12 04:54:33 crc kubenswrapper[4796]: I1212 04:54:33.849949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934","Type":"ContainerStarted","Data":"850c19c807e02724875c06f24732d14abf2ac78f0e9f59068c187018789c9d7a"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.595928 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.757739 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8x2q\" (UniqueName: \"kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758020 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758123 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758261 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758322 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.758355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data\") pod \"6e1f763b-3592-490e-b882-91e8d702c3a0\" (UID: \"6e1f763b-3592-490e-b882-91e8d702c3a0\") " Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.761582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.761937 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.767447 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts" (OuterVolumeSpecName: "scripts") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.768380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q" (OuterVolumeSpecName: "kube-api-access-j8x2q") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "kube-api-access-j8x2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.799397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.863177 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.863201 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.863220 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.863230 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e1f763b-3592-490e-b882-91e8d702c3a0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.863238 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8x2q\" (UniqueName: \"kubernetes.io/projected/6e1f763b-3592-490e-b882-91e8d702c3a0-kube-api-access-j8x2q\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.868267 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cd46cb5-ba6c-480f-a039-95a66caa648a","Type":"ContainerStarted","Data":"77c31c9fd08242364854cbf9794d9684844cbb599da0b76e3eec31af494a05cb"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.886365 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerID="fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd" exitCode=0 Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.886455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerDied","Data":"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.886488 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e1f763b-3592-490e-b882-91e8d702c3a0","Type":"ContainerDied","Data":"4a06084120cf8bf8b4d123b4660da9e026746cf8211de693f22e7a896c4191ff"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.886508 4796 scope.go:117] "RemoveContainer" containerID="eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.886650 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.913953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.924492 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.926703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2d0ca4f-8c51-492b-ae06-3d09ecdc4934","Type":"ContainerStarted","Data":"a374a60c2997db87c20ac3e86bf89fdf357cecc27033889dd5b8e8dd08700abd"} Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.926746 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 12 04:54:34 crc kubenswrapper[4796]: I1212 04:54:34.950781 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.950753903 podStartE2EDuration="4.950753903s" podCreationTimestamp="2025-12-12 04:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:34.903170525 +0000 UTC m=+1265.779187672" watchObservedRunningTime="2025-12-12 04:54:34.950753903 +0000 UTC m=+1265.826771090" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:34.994797 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.017704 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.017682765 podStartE2EDuration="5.017682765s" podCreationTimestamp="2025-12-12 04:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:34.966882987 +0000 UTC m=+1265.842900134" watchObservedRunningTime="2025-12-12 04:54:35.017682765 +0000 UTC m=+1265.893699912" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.024563 4796 scope.go:117] "RemoveContainer" 
containerID="57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.069781 4796 scope.go:117] "RemoveContainer" containerID="da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.092094 4796 scope.go:117] "RemoveContainer" containerID="fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.096544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data" (OuterVolumeSpecName: "config-data") pod "6e1f763b-3592-490e-b882-91e8d702c3a0" (UID: "6e1f763b-3592-490e-b882-91e8d702c3a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.112424 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1f763b-3592-490e-b882-91e8d702c3a0-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.114953 4796 scope.go:117] "RemoveContainer" containerID="eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.117397 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69\": container with ID starting with eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69 not found: ID does not exist" containerID="eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.117433 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69"} err="failed to get container status \"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69\": rpc error: code = NotFound desc = could not find container \"eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69\": container with ID starting with eec75a3d7fac18c5d320f84d402ebafa0b9186f7073cc9b7cfbf8ab490b2bb69 not found: ID does not exist" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.117456 4796 scope.go:117] "RemoveContainer" containerID="57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.118401 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32\": container with ID starting with 57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32 not found: ID does not exist" containerID="57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.118431 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32"} err="failed to get container status \"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32\": rpc error: code = NotFound desc = could not find container \"57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32\": container with ID starting with 
57044fb164929dd367f797a429242d3be75be866cce2782bbef73a0802e9bf32 not found: ID does not exist" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.118448 4796 scope.go:117] "RemoveContainer" containerID="da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.118690 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22\": container with ID starting with da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22 not found: ID does not exist" containerID="da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.118784 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22"} err="failed to get container status \"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22\": rpc error: code = NotFound desc = could not find container \"da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22\": container with ID starting with da6ebb48eeb1c09123365c56a990a1681db5f7b5848f4516ad891740863f4d22 not found: ID does not exist" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.118863 4796 scope.go:117] "RemoveContainer" containerID="fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.119138 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd\": container with ID starting with fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd not found: ID does not exist" containerID="fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.119249 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd"} err="failed to get container status \"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd\": rpc error: code = NotFound desc = could not find container \"fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd\": container with ID starting with fde782b0ac6ead432894fd6f5499bef39d830e8f56a5bfeddba0a90b736db5bd not found: ID does not exist" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.229526 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.240149 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259113 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.259624 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="proxy-httpd" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259650 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="proxy-httpd" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.259673 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
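The RemoveContainer/DeleteContainer exchanges above show the kubelet tolerating rpc NotFound from the CRI runtime: the container it wants to delete is already gone, so the error is logged and cleanup proceeds. A minimal sketch of that pattern using google.golang.org/grpc/status follows; deleteIgnoringNotFound and its remove callback are hypothetical names for illustration, not kubelet code.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteIgnoringNotFound treats a gRPC NotFound from the runtime as success:
// the container is already absent, so the cleanup goal is met.
func deleteIgnoringNotFound(remove func(id string) error, id string) error {
	err := remove(id)
	if err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove container %q: %w", id, err)
	}
	return nil // removed now, or already gone
}

func main() {
	alreadyGone := func(string) error {
		return status.Error(codes.NotFound, "could not find container")
	}
	fmt.Println(deleteIgnoringNotFound(alreadyGone, "example-container-id")) // <nil>
}
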
podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-central-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259682 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-central-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.259752 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-notification-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259761 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-notification-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: E1212 04:54:35.259776 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="sg-core" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259784 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="sg-core" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.259974 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="proxy-httpd" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.260002 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="sg-core" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.260011 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-central-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.260023 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" containerName="ceilometer-notification-agent" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.263779 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.265774 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.265928 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.288155 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419539 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhw8l\" (UniqueName: \"kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.419920 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.422761 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1f763b-3592-490e-b882-91e8d702c3a0" path="/var/lib/kubelet/pods/6e1f763b-3592-490e-b882-91e8d702c3a0/volumes" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521277 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521486 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhw8l\" (UniqueName: \"kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521599 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.521623 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.522421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.522584 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.525806 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.527321 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.527414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.529964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.541336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhw8l\" (UniqueName: \"kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l\") pod \"ceilometer-0\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " pod="openstack/ceilometer-0" Dec 12 04:54:35 crc kubenswrapper[4796]: I1212 04:54:35.581693 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.097560 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.417639 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sjlzk"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.422668 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.428596 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sjlzk"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.548960 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bpw4k"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.550322 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.557339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.557384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gld8\" (UniqueName: \"kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.569812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bpw4k"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.642756 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9ffa-account-create-update-sbpcq"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.643786 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.645974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.656381 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9ffa-account-create-update-sbpcq"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.661067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.661098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gld8\" (UniqueName: \"kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.661163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.661232 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfvk\" (UniqueName: \"kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.661711 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.702338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gld8\" (UniqueName: \"kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8\") pod \"nova-api-db-create-sjlzk\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.750255 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nkhcz"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.751620 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.762949 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.763075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfvk\" (UniqueName: \"kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.763133 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlmg\" (UniqueName: \"kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.763304 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.764076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.772555 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nkhcz"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.772815 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.805114 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfvk\" (UniqueName: \"kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk\") pod \"nova-cell0-db-create-bpw4k\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.869164 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3c59-account-create-update-x8hfx"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.869350 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.870218 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.871607 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.872512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.872579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtr7\" (UniqueName: \"kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.872632 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.872732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlmg\" (UniqueName: \"kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.873527 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.880375 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c59-account-create-update-x8hfx"] Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.925612 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7hlmg\" (UniqueName: \"kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg\") pod \"nova-api-9ffa-account-create-update-sbpcq\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.958025 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.977302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.977378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtr7\" (UniqueName: \"kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.984388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.984495 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngvj\" (UniqueName: \"kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:36 crc kubenswrapper[4796]: I1212 04:54:36.984645 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.018653 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.033030 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.040332 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.095224 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 
04:54:37.097629 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngvj\" (UniqueName: \"kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.097708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.099555 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.099829 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.099949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.111154 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9a93-account-create-update-6wjp2"] Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.113028 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.125092 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtr7\" (UniqueName: \"kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7\") pod \"nova-cell1-db-create-nkhcz\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.125260 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.144804 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9a93-account-create-update-6wjp2"] Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.184785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngvj\" (UniqueName: \"kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj\") pod \"nova-cell0-3c59-account-create-update-x8hfx\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.200530 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerStarted","Data":"ff1f7984c005a3d077c715c4c95110c75f37dcce3f397c6a768896dcce57a13c"} Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.201579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts\") pod \"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.202548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5gd\" (UniqueName: \"kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd\") pod \"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.309786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts\") pod \"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.309872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb5gd\" (UniqueName: \"kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd\") pod \"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.310673 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts\") pod 
\"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.330602 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb5gd\" (UniqueName: \"kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd\") pod \"nova-cell1-9a93-account-create-update-6wjp2\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.359859 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.400677 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.466587 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sjlzk"] Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.474644 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.642852 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bpw4k"] Dec 12 04:54:37 crc kubenswrapper[4796]: W1212 04:54:37.665604 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd1f195_1eb4_41ed_a548_4f3e7e0d5090.slice/crio-d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc WatchSource:0}: Error finding container d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc: Status 404 returned error can't find the container with id d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.763715 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9ffa-account-create-update-sbpcq"] Dec 12 04:54:37 crc kubenswrapper[4796]: I1212 04:54:37.986556 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3c59-account-create-update-x8hfx"] Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.115866 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nkhcz"] Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.245119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nkhcz" event={"ID":"8958fdf9-254c-4b91-aec6-b49c39d63856","Type":"ContainerStarted","Data":"256493ddf7981f9d791b707055b063f4e4ab7c76750dee6894896ff399b2e98d"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.266417 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" event={"ID":"58a4043e-72a9-434f-9a65-f64d0e6846df","Type":"ContainerStarted","Data":"650213f560c88f0cbfcc81dcd287e2634990401de74db4aa1cd1d293b5fb74de"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.273487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" event={"ID":"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e","Type":"ContainerStarted","Data":"44978376a828b6bc5ffc6982d150341b9bb8701e3c1e5cf9bb3e88fd95f18379"} Dec 12 04:54:38 crc 
kubenswrapper[4796]: I1212 04:54:38.282905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerStarted","Data":"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.285733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjlzk" event={"ID":"26991dea-36e1-4038-9a9f-112a834e81dd","Type":"ContainerStarted","Data":"66f3a8b6a51e92537bc5d716e1396da95f22744e5dee0be5717c71f3c143ea64"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.298788 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bpw4k" event={"ID":"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090","Type":"ContainerStarted","Data":"184e71120c10bc2b9fdd3653138d949cc7fb487dd5925bc8228a0e4838ac6020"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.298836 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bpw4k" event={"ID":"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090","Type":"ContainerStarted","Data":"d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc"} Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.322026 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-sjlzk" podStartSLOduration=2.32201086 podStartE2EDuration="2.32201086s" podCreationTimestamp="2025-12-12 04:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:38.306825025 +0000 UTC m=+1269.182842182" watchObservedRunningTime="2025-12-12 04:54:38.32201086 +0000 UTC m=+1269.198028007" Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.348249 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9a93-account-create-update-6wjp2"] Dec 12 04:54:38 crc kubenswrapper[4796]: I1212 04:54:38.359322 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-bpw4k" podStartSLOduration=2.359302526 podStartE2EDuration="2.359302526s" podCreationTimestamp="2025-12-12 04:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:54:38.335891654 +0000 UTC m=+1269.211908801" watchObservedRunningTime="2025-12-12 04:54:38.359302526 +0000 UTC m=+1269.235319673" Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.338206 4796 generic.go:334] "Generic (PLEG): container finished" podID="8958fdf9-254c-4b91-aec6-b49c39d63856" containerID="682e6fd791d4a8ebd8af706cdd2ef9bb6fc2c87efa167d13b7954e56de84dec3" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.338422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nkhcz" event={"ID":"8958fdf9-254c-4b91-aec6-b49c39d63856","Type":"ContainerDied","Data":"682e6fd791d4a8ebd8af706cdd2ef9bb6fc2c87efa167d13b7954e56de84dec3"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.342838 4796 generic.go:334] "Generic (PLEG): container finished" podID="58a4043e-72a9-434f-9a65-f64d0e6846df" containerID="a22573f52dc6d40f8511df4a5192727f699b594c99a1abc0d5714a29c94977e6" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.342949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" 
event={"ID":"58a4043e-72a9-434f-9a65-f64d0e6846df","Type":"ContainerDied","Data":"a22573f52dc6d40f8511df4a5192727f699b594c99a1abc0d5714a29c94977e6"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.349263 4796 generic.go:334] "Generic (PLEG): container finished" podID="f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" containerID="d5a4fd0b12bf466f4cdb65c8b519f0984a856710fff5f6dbbea0533939734875" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.349395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" event={"ID":"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e","Type":"ContainerDied","Data":"d5a4fd0b12bf466f4cdb65c8b519f0984a856710fff5f6dbbea0533939734875"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.365231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerStarted","Data":"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.377276 4796 generic.go:334] "Generic (PLEG): container finished" podID="26991dea-36e1-4038-9a9f-112a834e81dd" containerID="3d097becea6f0bee786581278d069985871d9bfdae627ef9fef1b839bfced863" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.377393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjlzk" event={"ID":"26991dea-36e1-4038-9a9f-112a834e81dd","Type":"ContainerDied","Data":"3d097becea6f0bee786581278d069985871d9bfdae627ef9fef1b839bfced863"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.393273 4796 generic.go:334] "Generic (PLEG): container finished" podID="cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" containerID="184e71120c10bc2b9fdd3653138d949cc7fb487dd5925bc8228a0e4838ac6020" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.393375 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bpw4k" event={"ID":"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090","Type":"ContainerDied","Data":"184e71120c10bc2b9fdd3653138d949cc7fb487dd5925bc8228a0e4838ac6020"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.403757 4796 generic.go:334] "Generic (PLEG): container finished" podID="b5503fc1-7260-47f1-ac1c-97650aacd9d8" containerID="37d90340a70c1697621ff16565f0bdcc984efa76ef1ed350bacfd488c885ece2" exitCode=0 Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.404005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" event={"ID":"b5503fc1-7260-47f1-ac1c-97650aacd9d8","Type":"ContainerDied","Data":"37d90340a70c1697621ff16565f0bdcc984efa76ef1ed350bacfd488c885ece2"} Dec 12 04:54:39 crc kubenswrapper[4796]: I1212 04:54:39.404081 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" event={"ID":"b5503fc1-7260-47f1-ac1c-97650aacd9d8","Type":"ContainerStarted","Data":"0b6b315ac0c81e0677d195c3a63870d0e522bc1b6cabc9129e18f0c003d0449d"} Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.216796 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.217131 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.254661 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.255462 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.415853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerStarted","Data":"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9"} Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.416652 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:40 crc kubenswrapper[4796]: I1212 04:54:40.416692 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.049562 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.072800 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngvj\" (UniqueName: \"kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj\") pod \"58a4043e-72a9-434f-9a65-f64d0e6846df\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.072901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts\") pod \"58a4043e-72a9-434f-9a65-f64d0e6846df\" (UID: \"58a4043e-72a9-434f-9a65-f64d0e6846df\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.074741 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58a4043e-72a9-434f-9a65-f64d0e6846df" (UID: "58a4043e-72a9-434f-9a65-f64d0e6846df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.080556 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj" (OuterVolumeSpecName: "kube-api-access-nngvj") pod "58a4043e-72a9-434f-9a65-f64d0e6846df" (UID: "58a4043e-72a9-434f-9a65-f64d0e6846df"). InnerVolumeSpecName "kube-api-access-nngvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.176178 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a4043e-72a9-434f-9a65-f64d0e6846df-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.176216 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngvj\" (UniqueName: \"kubernetes.io/projected/58a4043e-72a9-434f-9a65-f64d0e6846df-kube-api-access-nngvj\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.236000 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.236575 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.351693 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.435809 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.479732 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.485131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts\") pod \"8958fdf9-254c-4b91-aec6-b49c39d63856\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.488939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtr7\" (UniqueName: \"kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7\") pod \"8958fdf9-254c-4b91-aec6-b49c39d63856\" (UID: \"8958fdf9-254c-4b91-aec6-b49c39d63856\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.500886 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8958fdf9-254c-4b91-aec6-b49c39d63856" (UID: "8958fdf9-254c-4b91-aec6-b49c39d63856"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.526652 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.538304 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.541192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" event={"ID":"58a4043e-72a9-434f-9a65-f64d0e6846df","Type":"ContainerDied","Data":"650213f560c88f0cbfcc81dcd287e2634990401de74db4aa1cd1d293b5fb74de"} Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.541218 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="650213f560c88f0cbfcc81dcd287e2634990401de74db4aa1cd1d293b5fb74de" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.541295 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3c59-account-create-update-x8hfx" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.613944 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts\") pod \"26991dea-36e1-4038-9a9f-112a834e81dd\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.614466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thfvk\" (UniqueName: \"kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk\") pod \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.614499 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts\") pod \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\" (UID: \"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.614523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlmg\" (UniqueName: \"kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg\") pod \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.614613 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts\") pod \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\" (UID: \"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.614649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gld8\" (UniqueName: \"kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8\") pod \"26991dea-36e1-4038-9a9f-112a834e81dd\" (UID: \"26991dea-36e1-4038-9a9f-112a834e81dd\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.618085 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" (UID: "cbd1f195-1eb4-41ed-a548-4f3e7e0d5090"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.618609 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26991dea-36e1-4038-9a9f-112a834e81dd" (UID: "26991dea-36e1-4038-9a9f-112a834e81dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.620010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7" (OuterVolumeSpecName: "kube-api-access-hjtr7") pod "8958fdf9-254c-4b91-aec6-b49c39d63856" (UID: "8958fdf9-254c-4b91-aec6-b49c39d63856"). InnerVolumeSpecName "kube-api-access-hjtr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.620332 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.621170 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" event={"ID":"f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e","Type":"ContainerDied","Data":"44978376a828b6bc5ffc6982d150341b9bb8701e3c1e5cf9bb3e88fd95f18379"} Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.621195 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44978376a828b6bc5ffc6982d150341b9bb8701e3c1e5cf9bb3e88fd95f18379" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.621240 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ffa-account-create-update-sbpcq" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.624830 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" (UID: "f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.625328 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648484 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26991dea-36e1-4038-9a9f-112a834e81dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648516 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8958fdf9-254c-4b91-aec6-b49c39d63856-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648530 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtr7\" (UniqueName: \"kubernetes.io/projected/8958fdf9-254c-4b91-aec6-b49c39d63856-kube-api-access-hjtr7\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648542 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648558 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648698 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sjlzk" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjlzk" event={"ID":"26991dea-36e1-4038-9a9f-112a834e81dd","Type":"ContainerDied","Data":"66f3a8b6a51e92537bc5d716e1396da95f22744e5dee0be5717c71f3c143ea64"} Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.648796 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f3a8b6a51e92537bc5d716e1396da95f22744e5dee0be5717c71f3c143ea64" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.651708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bpw4k" event={"ID":"cbd1f195-1eb4-41ed-a548-4f3e7e0d5090","Type":"ContainerDied","Data":"d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc"} Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.651740 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8eba465bd8e48adef7591c53ad74803fa9313ca3c7b14177fab173a21d9bedc" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.651905 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bpw4k" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.681664 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nkhcz" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.682679 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nkhcz" event={"ID":"8958fdf9-254c-4b91-aec6-b49c39d63856","Type":"ContainerDied","Data":"256493ddf7981f9d791b707055b063f4e4ab7c76750dee6894896ff399b2e98d"} Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.682719 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256493ddf7981f9d791b707055b063f4e4ab7c76750dee6894896ff399b2e98d" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.684616 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.684676 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.702034 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk" (OuterVolumeSpecName: "kube-api-access-thfvk") pod "cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" (UID: "cbd1f195-1eb4-41ed-a548-4f3e7e0d5090"). InnerVolumeSpecName "kube-api-access-thfvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.722864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg" (OuterVolumeSpecName: "kube-api-access-7hlmg") pod "f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" (UID: "f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e"). InnerVolumeSpecName "kube-api-access-7hlmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.724788 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8" (OuterVolumeSpecName: "kube-api-access-9gld8") pod "26991dea-36e1-4038-9a9f-112a834e81dd" (UID: "26991dea-36e1-4038-9a9f-112a834e81dd"). InnerVolumeSpecName "kube-api-access-9gld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.750160 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts\") pod \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.750476 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5gd\" (UniqueName: \"kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd\") pod \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\" (UID: \"b5503fc1-7260-47f1-ac1c-97650aacd9d8\") " Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.754852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5503fc1-7260-47f1-ac1c-97650aacd9d8" (UID: "b5503fc1-7260-47f1-ac1c-97650aacd9d8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.757041 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thfvk\" (UniqueName: \"kubernetes.io/projected/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090-kube-api-access-thfvk\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.757064 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hlmg\" (UniqueName: \"kubernetes.io/projected/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e-kube-api-access-7hlmg\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.757074 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5503fc1-7260-47f1-ac1c-97650aacd9d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.757084 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gld8\" (UniqueName: \"kubernetes.io/projected/26991dea-36e1-4038-9a9f-112a834e81dd-kube-api-access-9gld8\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.760467 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd" (OuterVolumeSpecName: "kube-api-access-zb5gd") pod "b5503fc1-7260-47f1-ac1c-97650aacd9d8" (UID: "b5503fc1-7260-47f1-ac1c-97650aacd9d8"). InnerVolumeSpecName "kube-api-access-zb5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:41 crc kubenswrapper[4796]: I1212 04:54:41.858494 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb5gd\" (UniqueName: \"kubernetes.io/projected/b5503fc1-7260-47f1-ac1c-97650aacd9d8-kube-api-access-zb5gd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:42 crc kubenswrapper[4796]: I1212 04:54:42.691331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerStarted","Data":"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e"} Dec 12 04:54:42 crc kubenswrapper[4796]: I1212 04:54:42.692943 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:54:42 crc kubenswrapper[4796]: I1212 04:54:42.694665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" event={"ID":"b5503fc1-7260-47f1-ac1c-97650aacd9d8","Type":"ContainerDied","Data":"0b6b315ac0c81e0677d195c3a63870d0e522bc1b6cabc9129e18f0c003d0449d"} Dec 12 04:54:42 crc kubenswrapper[4796]: I1212 04:54:42.694693 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6b315ac0c81e0677d195c3a63870d0e522bc1b6cabc9129e18f0c003d0449d" Dec 12 04:54:42 crc kubenswrapper[4796]: I1212 04:54:42.694726 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9a93-account-create-update-6wjp2" Dec 12 04:54:43 crc kubenswrapper[4796]: I1212 04:54:43.702175 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:54:43 crc kubenswrapper[4796]: I1212 04:54:43.702757 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.652931 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.653247 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.654628 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.675143 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.083304039 podStartE2EDuration="9.675126162s" podCreationTimestamp="2025-12-12 04:54:35 +0000 UTC" firstStartedPulling="2025-12-12 04:54:36.1045232 +0000 UTC m=+1266.980540347" lastFinishedPulling="2025-12-12 04:54:40.696345323 +0000 UTC m=+1271.572362470" observedRunningTime="2025-12-12 04:54:42.725331491 +0000 UTC m=+1273.601348638" watchObservedRunningTime="2025-12-12 04:54:44.675126162 +0000 UTC m=+1275.551143299" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.889729 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 04:54:44 crc kubenswrapper[4796]: I1212 04:54:44.889814 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 04:54:45 crc kubenswrapper[4796]: I1212 04:54:45.675144 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 12 04:54:46 crc kubenswrapper[4796]: I1212 04:54:46.029766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.020550 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.096394 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.230765 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fwxjx"] Dec 12 04:54:47 crc kubenswrapper[4796]: E1212 04:54:47.231121 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26991dea-36e1-4038-9a9f-112a834e81dd" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231136 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="26991dea-36e1-4038-9a9f-112a834e81dd" containerName="mariadb-database-create" Dec 12 04:54:47 crc 
kubenswrapper[4796]: E1212 04:54:47.231147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a4043e-72a9-434f-9a65-f64d0e6846df" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231153 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a4043e-72a9-434f-9a65-f64d0e6846df" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: E1212 04:54:47.231185 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231193 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: E1212 04:54:47.231201 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231207 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: E1212 04:54:47.231216 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8958fdf9-254c-4b91-aec6-b49c39d63856" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231222 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8958fdf9-254c-4b91-aec6-b49c39d63856" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: E1212 04:54:47.231234 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5503fc1-7260-47f1-ac1c-97650aacd9d8" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231240 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5503fc1-7260-47f1-ac1c-97650aacd9d8" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231410 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a4043e-72a9-434f-9a65-f64d0e6846df" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231419 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231430 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8958fdf9-254c-4b91-aec6-b49c39d63856" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231441 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231451 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5503fc1-7260-47f1-ac1c-97650aacd9d8" containerName="mariadb-account-create-update" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.231461 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="26991dea-36e1-4038-9a9f-112a834e81dd" containerName="mariadb-database-create" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.232000 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.234953 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tr5sb" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.236130 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.241925 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.244146 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fwxjx"] Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.350552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.350596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmmm\" (UniqueName: \"kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.350730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.350883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.453276 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.453368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.453517 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: 
\"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.453546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flmmm\" (UniqueName: \"kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.459247 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.461480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.468346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.477075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flmmm\" (UniqueName: \"kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm\") pod \"nova-cell0-conductor-db-sync-fwxjx\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:47 crc kubenswrapper[4796]: I1212 04:54:47.552915 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:54:48 crc kubenswrapper[4796]: I1212 04:54:48.231630 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fwxjx"] Dec 12 04:54:48 crc kubenswrapper[4796]: I1212 04:54:48.740226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" event={"ID":"2b790398-6b23-4d61-9ad2-a79b868ad057","Type":"ContainerStarted","Data":"89f8303b03c152b623e995ebe338082180d1194039827cdc6e700225c576681f"} Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.125711 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.126336 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-central-agent" containerID="cri-o://0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c" gracePeriod=30 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.127031 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="proxy-httpd" containerID="cri-o://f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e" gracePeriod=30 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.127084 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="sg-core" containerID="cri-o://6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9" gracePeriod=30 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.127117 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-notification-agent" containerID="cri-o://a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5" gracePeriod=30 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.154562 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.839658 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerID="f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e" exitCode=0 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.839919 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerID="6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9" exitCode=2 Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.839756 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerDied","Data":"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e"} Dec 12 04:54:51 crc kubenswrapper[4796]: I1212 04:54:51.839954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerDied","Data":"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9"} Dec 12 04:54:52 crc kubenswrapper[4796]: 
I1212 04:54:52.854078 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerID="a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5" exitCode=0 Dec 12 04:54:52 crc kubenswrapper[4796]: I1212 04:54:52.854251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerDied","Data":"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5"} Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.310533 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483056 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483135 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483203 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483235 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.483258 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhw8l\" (UniqueName: \"kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l\") pod \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\" (UID: \"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f\") " Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.484638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.484872 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.503107 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l" (OuterVolumeSpecName: "kube-api-access-hhw8l") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "kube-api-access-hhw8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.509053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts" (OuterVolumeSpecName: "scripts") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.578699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.588720 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.588763 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.588775 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.588787 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.588798 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhw8l\" (UniqueName: \"kubernetes.io/projected/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-kube-api-access-hhw8l\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.623582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.691407 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.705967 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data" (OuterVolumeSpecName: "config-data") pod "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" (UID: "67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.793433 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.868927 4796 generic.go:334] "Generic (PLEG): container finished" podID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerID="0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c" exitCode=0 Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.868975 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerDied","Data":"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c"} Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.869031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f","Type":"ContainerDied","Data":"ff1f7984c005a3d077c715c4c95110c75f37dcce3f397c6a768896dcce57a13c"} Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.869056 4796 scope.go:117] "RemoveContainer" containerID="f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.869215 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:53 crc kubenswrapper[4796]: I1212 04:54:53.973778 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.006536 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.017347 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:54 crc kubenswrapper[4796]: E1212 04:54:54.018316 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="proxy-httpd" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.018342 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="proxy-httpd" Dec 12 04:54:54 crc kubenswrapper[4796]: E1212 04:54:54.018367 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-central-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.018375 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-central-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: E1212 04:54:54.018408 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="sg-core" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.018418 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="sg-core" Dec 12 04:54:54 crc kubenswrapper[4796]: E1212 04:54:54.018435 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-notification-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.018443 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-notification-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.019004 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="proxy-httpd" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.019047 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-notification-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.019059 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="ceilometer-central-agent" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.019105 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" containerName="sg-core" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.059730 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.059811 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.066902 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.067174 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.143330 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:54 crc kubenswrapper[4796]: E1212 04:54:54.143939 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-xkzhh log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="f031a3df-54d5-410d-ac4a-33fa8a00fb53" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzhh\" (UniqueName: \"kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218886 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218914 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.218983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.219004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc 
kubenswrapper[4796]: I1212 04:54:54.320481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzhh\" (UniqueName: \"kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.320672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.321577 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.322868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.331088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.335567 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.336925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.342061 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzhh\" (UniqueName: \"kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.353805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.879013 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:54 crc kubenswrapper[4796]: I1212 04:54:54.890566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035486 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzhh\" (UniqueName: \"kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035643 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.035864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.036067 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.036266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.039532 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle\") pod \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\" (UID: \"f031a3df-54d5-410d-ac4a-33fa8a00fb53\") " Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.040487 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.040513 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f031a3df-54d5-410d-ac4a-33fa8a00fb53-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.041562 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts" (OuterVolumeSpecName: "scripts") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.041573 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh" (OuterVolumeSpecName: "kube-api-access-xkzhh") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "kube-api-access-xkzhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.042373 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.044412 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.048440 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data" (OuterVolumeSpecName: "config-data") pod "f031a3df-54d5-410d-ac4a-33fa8a00fb53" (UID: "f031a3df-54d5-410d-ac4a-33fa8a00fb53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.142494 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.142528 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.142539 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.142548 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f031a3df-54d5-410d-ac4a-33fa8a00fb53-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.142557 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzhh\" (UniqueName: \"kubernetes.io/projected/f031a3df-54d5-410d-ac4a-33fa8a00fb53-kube-api-access-xkzhh\") on node \"crc\" DevicePath \"\"" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.421186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f" path="/var/lib/kubelet/pods/67ff40e8-fd0c-48bf-9fb4-51a0d1740c0f/volumes" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.886530 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.959866 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.975015 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:55 crc kubenswrapper[4796]: I1212 04:54:55.993102 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.005207 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.012734 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.012894 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.022093 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164193 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164286 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.164354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvz7\" (UniqueName: \"kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265779 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvz7\" (UniqueName: \"kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.265947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.266835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.266983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.281396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.281545 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.284057 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.285597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvz7\" (UniqueName: \"kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.287404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " pod="openstack/ceilometer-0" Dec 12 04:54:56 crc kubenswrapper[4796]: I1212 04:54:56.344698 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.018909 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.019067 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.020119 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31"} pod="openstack/horizon-6cb55bccb4-z8p6q" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.020186 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" containerID="cri-o://d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31" gracePeriod=30 Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.095326 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.095636 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.096465 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a"} pod="openstack/horizon-67764d6b9b-h7fdk" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.096503 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67764d6b9b-h7fdk" 
podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" containerID="cri-o://b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a" gracePeriod=30 Dec 12 04:54:57 crc kubenswrapper[4796]: I1212 04:54:57.427539 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f031a3df-54d5-410d-ac4a-33fa8a00fb53" path="/var/lib/kubelet/pods/f031a3df-54d5-410d-ac4a-33fa8a00fb53/volumes" Dec 12 04:54:59 crc kubenswrapper[4796]: I1212 04:54:59.746062 4796 scope.go:117] "RemoveContainer" containerID="6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9" Dec 12 04:54:59 crc kubenswrapper[4796]: I1212 04:54:59.841644 4796 scope.go:117] "RemoveContainer" containerID="a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5" Dec 12 04:54:59 crc kubenswrapper[4796]: I1212 04:54:59.956311 4796 scope.go:117] "RemoveContainer" containerID="0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.002680 4796 scope.go:117] "RemoveContainer" containerID="f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e" Dec 12 04:55:00 crc kubenswrapper[4796]: E1212 04:55:00.003714 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e\": container with ID starting with f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e not found: ID does not exist" containerID="f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.003746 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e"} err="failed to get container status \"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e\": rpc error: code = NotFound desc = could not find container \"f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e\": container with ID starting with f27e24e7adf9ea21f3dc5ec234fe5edde7b62de0c6c210c0eb2e962fdf32785e not found: ID does not exist" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.003769 4796 scope.go:117] "RemoveContainer" containerID="6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9" Dec 12 04:55:00 crc kubenswrapper[4796]: E1212 04:55:00.006322 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9\": container with ID starting with 6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9 not found: ID does not exist" containerID="6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.006357 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9"} err="failed to get container status \"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9\": rpc error: code = NotFound desc = could not find container \"6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9\": container with ID starting with 6b1b0972f2fa3d33aeddeff4765461cbc38f0d79bea0421653acc97c2fad12e9 not found: ID does not exist" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.006418 4796 scope.go:117] "RemoveContainer" 
containerID="a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5" Dec 12 04:55:00 crc kubenswrapper[4796]: E1212 04:55:00.006713 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5\": container with ID starting with a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5 not found: ID does not exist" containerID="a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.006761 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5"} err="failed to get container status \"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5\": rpc error: code = NotFound desc = could not find container \"a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5\": container with ID starting with a64b1f96a16a813047b75b48bc92cc834b81c5e8839c2c2361650b650484aee5 not found: ID does not exist" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.006781 4796 scope.go:117] "RemoveContainer" containerID="0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c" Dec 12 04:55:00 crc kubenswrapper[4796]: E1212 04:55:00.007038 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c\": container with ID starting with 0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c not found: ID does not exist" containerID="0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c" Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.007088 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c"} err="failed to get container status \"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c\": rpc error: code = NotFound desc = could not find container \"0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c\": container with ID starting with 0e91ad4196f82743591341991aba8df17023e1f30bbee21adbce36da8e91a25c not found: ID does not exist" Dec 12 04:55:00 crc kubenswrapper[4796]: W1212 04:55:00.300708 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacf5cec4_54d0_4c21_8b0a_e03ac14d2218.slice/crio-e5823174e3c66470b89569349297124e2fc23816e10ca248129fccaa40c284ec WatchSource:0}: Error finding container e5823174e3c66470b89569349297124e2fc23816e10ca248129fccaa40c284ec: Status 404 returned error can't find the container with id e5823174e3c66470b89569349297124e2fc23816e10ca248129fccaa40c284ec Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.301549 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.303726 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 04:55:00 crc kubenswrapper[4796]: I1212 04:55:00.975555 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerStarted","Data":"e5823174e3c66470b89569349297124e2fc23816e10ca248129fccaa40c284ec"} Dec 12 04:55:00 crc 
kubenswrapper[4796]: I1212 04:55:00.987766 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" event={"ID":"2b790398-6b23-4d61-9ad2-a79b868ad057","Type":"ContainerStarted","Data":"905595ae0f665d543c39961899db65f640913641eab0ccf7cfb57037233c6261"} Dec 12 04:55:01 crc kubenswrapper[4796]: I1212 04:55:01.014496 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" podStartSLOduration=2.353331396 podStartE2EDuration="14.014453932s" podCreationTimestamp="2025-12-12 04:54:47 +0000 UTC" firstStartedPulling="2025-12-12 04:54:48.197114322 +0000 UTC m=+1279.073131469" lastFinishedPulling="2025-12-12 04:54:59.858236858 +0000 UTC m=+1290.734254005" observedRunningTime="2025-12-12 04:55:01.005224433 +0000 UTC m=+1291.881241580" watchObservedRunningTime="2025-12-12 04:55:01.014453932 +0000 UTC m=+1291.890471079" Dec 12 04:55:02 crc kubenswrapper[4796]: I1212 04:55:02.025925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerStarted","Data":"526cf792cac3c81fc5a4091ca508881c8e7ae10518434e453b9fbdfc72e4311c"} Dec 12 04:55:04 crc kubenswrapper[4796]: I1212 04:55:04.041363 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerStarted","Data":"8e796d734b40e4a2c357b550d387f6681692face2bf7bf3def83a38ca770c746"} Dec 12 04:55:05 crc kubenswrapper[4796]: I1212 04:55:05.061303 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerStarted","Data":"3715d5366680c08a1a86b4f804ea4cd6e55e045fcd9d445a031dd4a27a9b277d"} Dec 12 04:55:06 crc kubenswrapper[4796]: I1212 04:55:06.073460 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerStarted","Data":"ae8a93c58a70b355f2b6c7d19d5c1270fdf2e094b9c337b7f619c585eb84d80a"} Dec 12 04:55:06 crc kubenswrapper[4796]: I1212 04:55:06.074481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:55:06 crc kubenswrapper[4796]: I1212 04:55:06.097272 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.214136566 podStartE2EDuration="11.097243633s" podCreationTimestamp="2025-12-12 04:54:55 +0000 UTC" firstStartedPulling="2025-12-12 04:55:00.303244409 +0000 UTC m=+1291.179261556" lastFinishedPulling="2025-12-12 04:55:05.186351476 +0000 UTC m=+1296.062368623" observedRunningTime="2025-12-12 04:55:06.095554691 +0000 UTC m=+1296.971571878" watchObservedRunningTime="2025-12-12 04:55:06.097243633 +0000 UTC m=+1296.973260810" Dec 12 04:55:11 crc kubenswrapper[4796]: I1212 04:55:11.160972 4796 generic.go:334] "Generic (PLEG): container finished" podID="2b790398-6b23-4d61-9ad2-a79b868ad057" containerID="905595ae0f665d543c39961899db65f640913641eab0ccf7cfb57037233c6261" exitCode=0 Dec 12 04:55:11 crc kubenswrapper[4796]: I1212 04:55:11.161081 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" event={"ID":"2b790398-6b23-4d61-9ad2-a79b868ad057","Type":"ContainerDied","Data":"905595ae0f665d543c39961899db65f640913641eab0ccf7cfb57037233c6261"} Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.509409 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.685941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts\") pod \"2b790398-6b23-4d61-9ad2-a79b868ad057\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.686037 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data\") pod \"2b790398-6b23-4d61-9ad2-a79b868ad057\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.686705 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flmmm\" (UniqueName: \"kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm\") pod \"2b790398-6b23-4d61-9ad2-a79b868ad057\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.686862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle\") pod \"2b790398-6b23-4d61-9ad2-a79b868ad057\" (UID: \"2b790398-6b23-4d61-9ad2-a79b868ad057\") " Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.693386 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts" (OuterVolumeSpecName: "scripts") pod "2b790398-6b23-4d61-9ad2-a79b868ad057" (UID: "2b790398-6b23-4d61-9ad2-a79b868ad057"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.695474 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm" (OuterVolumeSpecName: "kube-api-access-flmmm") pod "2b790398-6b23-4d61-9ad2-a79b868ad057" (UID: "2b790398-6b23-4d61-9ad2-a79b868ad057"). InnerVolumeSpecName "kube-api-access-flmmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.721709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b790398-6b23-4d61-9ad2-a79b868ad057" (UID: "2b790398-6b23-4d61-9ad2-a79b868ad057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.723817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data" (OuterVolumeSpecName: "config-data") pod "2b790398-6b23-4d61-9ad2-a79b868ad057" (UID: "2b790398-6b23-4d61-9ad2-a79b868ad057"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.788676 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.789496 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.789576 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flmmm\" (UniqueName: \"kubernetes.io/projected/2b790398-6b23-4d61-9ad2-a79b868ad057-kube-api-access-flmmm\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:12 crc kubenswrapper[4796]: I1212 04:55:12.789657 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b790398-6b23-4d61-9ad2-a79b868ad057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.184354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" event={"ID":"2b790398-6b23-4d61-9ad2-a79b868ad057","Type":"ContainerDied","Data":"89f8303b03c152b623e995ebe338082180d1194039827cdc6e700225c576681f"} Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.184392 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f8303b03c152b623e995ebe338082180d1194039827cdc6e700225c576681f" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.184465 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fwxjx" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.312427 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 04:55:13 crc kubenswrapper[4796]: E1212 04:55:13.312950 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b790398-6b23-4d61-9ad2-a79b868ad057" containerName="nova-cell0-conductor-db-sync" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.312976 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b790398-6b23-4d61-9ad2-a79b868ad057" containerName="nova-cell0-conductor-db-sync" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.313312 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b790398-6b23-4d61-9ad2-a79b868ad057" containerName="nova-cell0-conductor-db-sync" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.314235 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.318066 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tr5sb" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.318499 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.344092 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.401108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.401173 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.401202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9xd\" (UniqueName: \"kubernetes.io/projected/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-kube-api-access-8n9xd\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.503051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.503136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.503161 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9xd\" (UniqueName: \"kubernetes.io/projected/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-kube-api-access-8n9xd\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.513227 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.513260 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.522035 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9xd\" (UniqueName: \"kubernetes.io/projected/cd3c1c91-f0c1-4dd0-b23e-227f1353858a-kube-api-access-8n9xd\") pod \"nova-cell0-conductor-0\" (UID: \"cd3c1c91-f0c1-4dd0-b23e-227f1353858a\") " pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:13 crc kubenswrapper[4796]: I1212 04:55:13.647486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:14 crc kubenswrapper[4796]: I1212 04:55:14.113893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 12 04:55:14 crc kubenswrapper[4796]: I1212 04:55:14.194453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd3c1c91-f0c1-4dd0-b23e-227f1353858a","Type":"ContainerStarted","Data":"9d69892e1f59e791aed0ddca3ec3830d7c2271996e8f568bbb6cecc542362b86"} Dec 12 04:55:15 crc kubenswrapper[4796]: I1212 04:55:15.203946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd3c1c91-f0c1-4dd0-b23e-227f1353858a","Type":"ContainerStarted","Data":"1653b0ee64bacf05fcb9a1e91f225f90fc9d28412948c6d2e23abcc086a2f12a"} Dec 12 04:55:15 crc kubenswrapper[4796]: I1212 04:55:15.205452 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:15 crc kubenswrapper[4796]: I1212 04:55:15.220835 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.220816912 podStartE2EDuration="2.220816912s" podCreationTimestamp="2025-12-12 04:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:15.218755688 +0000 UTC m=+1306.094772835" watchObservedRunningTime="2025-12-12 04:55:15.220816912 +0000 UTC m=+1306.096834069" Dec 12 04:55:23 crc kubenswrapper[4796]: I1212 04:55:23.681131 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.227073 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jt82w"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.228459 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.231102 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.246352 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jt82w"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.249844 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.345351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.345548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.345594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.345665 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbfxn\" (UniqueName: \"kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.444817 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.446706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.446754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.446807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbfxn\" (UniqueName: \"kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc 
kubenswrapper[4796]: I1212 04:55:24.446838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.449716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.453382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.456861 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.456893 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.458026 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.471024 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.491814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbfxn\" (UniqueName: \"kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn\") pod \"nova-cell0-cell-mapping-jt82w\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.534563 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.542447 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.545077 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.553696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.555499 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.562444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.562513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7f5j\" (UniqueName: \"kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.575719 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667464 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667579 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lw9\" (UniqueName: \"kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667661 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667688 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7f5j\" (UniqueName: 
\"kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.667710 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.675348 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.676896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.714028 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.723455 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.739882 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.746130 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7f5j\" (UniqueName: \"kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j\") pod \"nova-cell1-novncproxy-0\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.769638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.770136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.770325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lw9\" (UniqueName: \"kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.778258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.790389 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.799849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.801673 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.803664 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.821353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.821819 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lw9\" (UniqueName: \"kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9\") pod \"nova-scheduler-0\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.833863 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.865692 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.873378 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.873416 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.873439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8bl\" (UniqueName: \"kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.873455 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.884969 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.895846 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.908247 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.930355 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977495 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977639 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8bl\" (UniqueName: \"kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2vc\" (UniqueName: \"kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.977967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.978040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.978205 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:24 crc kubenswrapper[4796]: I1212 04:55:24.980453 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.028503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.029857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.032558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8bl\" (UniqueName: \"kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl\") pod \"nova-api-0\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " pod="openstack/nova-api-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083553 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h569z\" (UniqueName: \"kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: 
\"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2vc\" (UniqueName: \"kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083738 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.083758 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.085876 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.089365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.091090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.108405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2vc\" (UniqueName: \"kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc\") pod \"nova-metadata-0\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.166743 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185619 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185685 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185749 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185778 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h569z\" (UniqueName: \"kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.185801 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.186701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.187210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.187605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc 
kubenswrapper[4796]: I1212 04:55:25.188315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.190701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.209661 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.221348 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h569z\" (UniqueName: \"kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z\") pod \"dnsmasq-dns-bccf8f775-gmxgh\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.300825 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.450336 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jt82w"] Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.695479 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:55:25 crc kubenswrapper[4796]: W1212 04:55:25.705958 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08f981fe_4957_44c0_86f6_53f08c41b746.slice/crio-226b3ca1f61cd7721849c8cc6d16648c9c15936c0a3a8257d1c8ed47ac516d89 WatchSource:0}: Error finding container 226b3ca1f61cd7721849c8cc6d16648c9c15936c0a3a8257d1c8ed47ac516d89: Status 404 returned error can't find the container with id 226b3ca1f61cd7721849c8cc6d16648c9c15936c0a3a8257d1c8ed47ac516d89 Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.818268 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.869854 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tg6gj"] Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.871330 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.878578 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.878798 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.879253 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tg6gj"] Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.961961 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:55:25 crc kubenswrapper[4796]: I1212 04:55:25.988861 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.025045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.025081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.025105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8498\" (UniqueName: \"kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.025203 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.126480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.126603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.126620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.126639 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8498\" (UniqueName: \"kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.133311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.135693 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.136689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.148840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8498\" (UniqueName: \"kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498\") pod \"nova-cell1-conductor-db-sync-tg6gj\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.202763 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.243820 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:55:26 crc kubenswrapper[4796]: W1212 04:55:26.255856 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9580c7ba_bc82_4bbb_b14b_d5d527390627.slice/crio-7495c93fb1c45ea994952ed89dc70c80f915baba89980585f578cf0c50edd91b WatchSource:0}: Error finding container 7495c93fb1c45ea994952ed89dc70c80f915baba89980585f578cf0c50edd91b: Status 404 returned error can't find the container with id 7495c93fb1c45ea994952ed89dc70c80f915baba89980585f578cf0c50edd91b Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.389821 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.439873 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jt82w" event={"ID":"0197fef8-4748-4ef6-a3cd-b038975d8882","Type":"ContainerStarted","Data":"5027cd938388feeef5402e43196b5e72000d5149d7a5ddd740a607f2331d72ca"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.439915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jt82w" event={"ID":"0197fef8-4748-4ef6-a3cd-b038975d8882","Type":"ContainerStarted","Data":"cfd259be08a8cf54bc7d095c9e0d41e29f6479289c6e53c46bf80a3441661690"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.453458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d799207-c66a-4dba-b3f7-1b1ef0a7c339","Type":"ContainerStarted","Data":"361423ff39b7e92ee5ecda119478e9d1fe90791f55bd3b1776509713a4e1b09f"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.456792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08f981fe-4957-44c0-86f6-53f08c41b746","Type":"ContainerStarted","Data":"226b3ca1f61cd7721849c8cc6d16648c9c15936c0a3a8257d1c8ed47ac516d89"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.457714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" event={"ID":"9580c7ba-bc82-4bbb-b14b-d5d527390627","Type":"ContainerStarted","Data":"7495c93fb1c45ea994952ed89dc70c80f915baba89980585f578cf0c50edd91b"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.458647 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerStarted","Data":"c094763910e23ab8d3cc0b14026c1adac74bc35a895c2492f8149bde67734b49"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.466307 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jt82w" podStartSLOduration=2.466266397 podStartE2EDuration="2.466266397s" podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:26.460411054 +0000 UTC m=+1317.336428211" watchObservedRunningTime="2025-12-12 04:55:26.466266397 +0000 UTC m=+1317.342283554" Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.466351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerStarted","Data":"0b58dda2bff100e25b603791777442cc1618a1131e6e2f424b632cda143a9fc2"} Dec 12 04:55:26 crc kubenswrapper[4796]: I1212 04:55:26.645124 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tg6gj"] Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.568116 4796 generic.go:334] "Generic (PLEG): container finished" podID="7913672c-384c-472c-89a8-0d546f345a28" containerID="d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31" exitCode=137 Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.568200 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerDied","Data":"d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31"} Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.568786 4796 scope.go:117] "RemoveContainer" containerID="7ee4b76a2712ab615b271101c7888ecca69c8d06360d3dab11046c4fb8bfb928" Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.589170 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" event={"ID":"b3077f41-167d-414c-9af4-05a03c32ab03","Type":"ContainerStarted","Data":"17fbdfc55ec1b1878b7115898e6c9c04f2fddd2b6e52a7c1fe53c85102fc1cf1"} Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.589236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" event={"ID":"b3077f41-167d-414c-9af4-05a03c32ab03","Type":"ContainerStarted","Data":"b2995691ef5a0d6e80d80032f3e050da847153eec8fa6ef727ed91418aae50d1"} Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.601086 4796 generic.go:334] "Generic (PLEG): container finished" podID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerID="b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a" exitCode=137 Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.601145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a"} Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.603670 4796 generic.go:334] "Generic (PLEG): container finished" podID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerID="c8fd3c37e0ab1e706a4aca597d1522d560a3ef7864745f454f08a894cf06089e" exitCode=0 Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.605076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" event={"ID":"9580c7ba-bc82-4bbb-b14b-d5d527390627","Type":"ContainerDied","Data":"c8fd3c37e0ab1e706a4aca597d1522d560a3ef7864745f454f08a894cf06089e"} Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.671518 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" podStartSLOduration=2.671502016 podStartE2EDuration="2.671502016s" podCreationTimestamp="2025-12-12 04:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:27.618921226 +0000 UTC m=+1318.494938373" watchObservedRunningTime="2025-12-12 04:55:27.671502016 +0000 UTC m=+1318.547519153" Dec 12 04:55:27 crc kubenswrapper[4796]: I1212 04:55:27.866325 4796 scope.go:117] "RemoveContainer" 
containerID="70b9c9eddbf4a440dcf231af081331ddd22ee3f9a6479629ac84e9ef933ac6f0" Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.454556 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.509159 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.618624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" event={"ID":"9580c7ba-bc82-4bbb-b14b-d5d527390627","Type":"ContainerStarted","Data":"57c51918b416e6698eb61899272ca25b3c23af5039836958e4e1276f308965e2"} Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.619763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.629114 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"40d4f2befd8046441735f846551f6cece578cad2c1c729b3a48374abe66a2e92"} Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.637188 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c"} Dec 12 04:55:28 crc kubenswrapper[4796]: I1212 04:55:28.643922 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" podStartSLOduration=4.643904533 podStartE2EDuration="4.643904533s" podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:28.643880192 +0000 UTC m=+1319.519897339" watchObservedRunningTime="2025-12-12 04:55:28.643904533 +0000 UTC m=+1319.519921680" Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.684192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d799207-c66a-4dba-b3f7-1b1ef0a7c339","Type":"ContainerStarted","Data":"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.691861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08f981fe-4957-44c0-86f6-53f08c41b746","Type":"ContainerStarted","Data":"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.691944 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="08f981fe-4957-44c0-86f6-53f08c41b746" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c" gracePeriod=30 Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.698456 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerStarted","Data":"710edf57133f0e497d7571561466f0f22f831bed422bf6175fe63b2631537d5a"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.698500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerStarted","Data":"7755798d73a4cf9578b6086072ee8f43ece4d5b8f8d3e4cda0bfafcb9c779c05"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.703493 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerStarted","Data":"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.703535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerStarted","Data":"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644"} Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.703640 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-log" containerID="cri-o://4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644" gracePeriod=30 Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.703652 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-metadata" containerID="cri-o://be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c" gracePeriod=30 Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.728560 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.663958203 podStartE2EDuration="8.728543591s" podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="2025-12-12 04:55:25.804892703 +0000 UTC m=+1316.680909850" lastFinishedPulling="2025-12-12 04:55:31.869478091 +0000 UTC m=+1322.745495238" observedRunningTime="2025-12-12 04:55:32.721411458 +0000 UTC m=+1323.597428625" watchObservedRunningTime="2025-12-12 04:55:32.728543591 +0000 UTC m=+1323.604560738" Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.744055 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.857097358 podStartE2EDuration="8.744032714s" podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="2025-12-12 04:55:25.999887636 +0000 UTC m=+1316.875904783" lastFinishedPulling="2025-12-12 04:55:31.886822992 +0000 UTC m=+1322.762840139" observedRunningTime="2025-12-12 04:55:32.740414841 +0000 UTC m=+1323.616432008" watchObservedRunningTime="2025-12-12 04:55:32.744032714 +0000 UTC m=+1323.620049861" Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.762115 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.601532856 podStartE2EDuration="8.762080208s" podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="2025-12-12 04:55:25.707903097 +0000 UTC m=+1316.583920244" lastFinishedPulling="2025-12-12 04:55:31.868450449 +0000 UTC m=+1322.744467596" observedRunningTime="2025-12-12 04:55:32.758224227 +0000 UTC m=+1323.634241374" watchObservedRunningTime="2025-12-12 04:55:32.762080208 +0000 UTC m=+1323.638097355" Dec 12 04:55:32 crc kubenswrapper[4796]: I1212 04:55:32.800822 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.923392468 podStartE2EDuration="8.800804436s" 
podCreationTimestamp="2025-12-12 04:55:24 +0000 UTC" firstStartedPulling="2025-12-12 04:55:25.992873268 +0000 UTC m=+1316.868890415" lastFinishedPulling="2025-12-12 04:55:31.870285226 +0000 UTC m=+1322.746302383" observedRunningTime="2025-12-12 04:55:32.783349201 +0000 UTC m=+1323.659366368" watchObservedRunningTime="2025-12-12 04:55:32.800804436 +0000 UTC m=+1323.676821573" Dec 12 04:55:33 crc kubenswrapper[4796]: I1212 04:55:33.747171 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerID="4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644" exitCode=143 Dec 12 04:55:33 crc kubenswrapper[4796]: I1212 04:55:33.747274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerDied","Data":"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644"} Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.216769 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.216986 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" containerName="kube-state-metrics" containerID="cri-o://5c960d282b73ccaf613f27942a36340b790558379e317bf72e18c3caea1f5d5e" gracePeriod=30 Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.760808 4796 generic.go:334] "Generic (PLEG): container finished" podID="3857d6ad-7515-4600-8e29-a5e3182f5253" containerID="5c960d282b73ccaf613f27942a36340b790558379e317bf72e18c3caea1f5d5e" exitCode=2 Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.760999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3857d6ad-7515-4600-8e29-a5e3182f5253","Type":"ContainerDied","Data":"5c960d282b73ccaf613f27942a36340b790558379e317bf72e18c3caea1f5d5e"} Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.866461 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.878861 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.885860 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.885898 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.955180 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.966424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l69x2\" (UniqueName: \"kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2\") pod \"3857d6ad-7515-4600-8e29-a5e3182f5253\" (UID: \"3857d6ad-7515-4600-8e29-a5e3182f5253\") " Dec 12 04:55:34 crc kubenswrapper[4796]: I1212 04:55:34.981486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2" (OuterVolumeSpecName: "kube-api-access-l69x2") pod "3857d6ad-7515-4600-8e29-a5e3182f5253" (UID: "3857d6ad-7515-4600-8e29-a5e3182f5253"). InnerVolumeSpecName "kube-api-access-l69x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.068302 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l69x2\" (UniqueName: \"kubernetes.io/projected/3857d6ad-7515-4600-8e29-a5e3182f5253-kube-api-access-l69x2\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.168449 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.168509 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.213209 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.213262 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.302500 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.385767 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.385973 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="dnsmasq-dns" containerID="cri-o://3e2595988077f516c051fa6033e3e6fe03bee4de665a43d4b8a9143013218cd8" gracePeriod=10 Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.806553 4796 generic.go:334] "Generic (PLEG): container finished" podID="1ae63b03-d161-4745-9912-afab23ec6f09" containerID="3e2595988077f516c051fa6033e3e6fe03bee4de665a43d4b8a9143013218cd8" exitCode=0 Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.806653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" 
event={"ID":"1ae63b03-d161-4745-9912-afab23ec6f09","Type":"ContainerDied","Data":"3e2595988077f516c051fa6033e3e6fe03bee4de665a43d4b8a9143013218cd8"} Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.812383 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.812792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3857d6ad-7515-4600-8e29-a5e3182f5253","Type":"ContainerDied","Data":"2d84d331452e06d1b7fc471ddb852799c3a40cae241a21d742e9d05665994aea"} Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.812834 4796 scope.go:117] "RemoveContainer" containerID="5c960d282b73ccaf613f27942a36340b790558379e317bf72e18c3caea1f5d5e" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.863590 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.898146 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.979773 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.980956 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:35 crc kubenswrapper[4796]: E1212 04:55:35.996306 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" containerName="kube-state-metrics" Dec 12 04:55:35 crc kubenswrapper[4796]: I1212 04:55:35.996330 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" containerName="kube-state-metrics" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.007269 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" containerName="kube-state-metrics" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.007935 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.008580 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.013801 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.014197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.062114 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092145 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092252 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvp4\" (UniqueName: \"kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092388 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc\") pod \"1ae63b03-d161-4745-9912-afab23ec6f09\" (UID: \"1ae63b03-d161-4745-9912-afab23ec6f09\") " Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092616 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzns\" (UniqueName: \"kubernetes.io/projected/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-api-access-4rzns\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092723 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.092775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.119799 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4" (OuterVolumeSpecName: "kube-api-access-xlvp4") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "kube-api-access-xlvp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.195153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.195219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzns\" (UniqueName: \"kubernetes.io/projected/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-api-access-4rzns\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.195282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.195343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.195419 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlvp4\" (UniqueName: \"kubernetes.io/projected/1ae63b03-d161-4745-9912-afab23ec6f09-kube-api-access-xlvp4\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.211880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.212261 4796 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.213801 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.214857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.223673 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzns\" (UniqueName: \"kubernetes.io/projected/416b9e99-eb64-4c24-9c32-0fb5bc210a2a-kube-api-access-4rzns\") pod \"kube-state-metrics-0\" (UID: \"416b9e99-eb64-4c24-9c32-0fb5bc210a2a\") " pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.228676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.241734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.262792 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.263411 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.265037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.286556 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config" (OuterVolumeSpecName: "config") pod "1ae63b03-d161-4745-9912-afab23ec6f09" (UID: "1ae63b03-d161-4745-9912-afab23ec6f09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.297499 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.297535 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.297547 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.297557 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.297568 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ae63b03-d161-4745-9912-afab23ec6f09-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.348056 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.821711 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" event={"ID":"1ae63b03-d161-4745-9912-afab23ec6f09","Type":"ContainerDied","Data":"13aeafce32de65f886ae72adf52cdc38050d220fdc3af5cb451522dbdad422ce"} Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.822051 4796 scope.go:117] "RemoveContainer" containerID="3e2595988077f516c051fa6033e3e6fe03bee4de665a43d4b8a9143013218cd8" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.822196 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gg9mc" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.867406 4796 scope.go:117] "RemoveContainer" containerID="df4b693b8018f4695f716f1cd278e2d1de4026c737fdc1b6e20157375a0a7fdb" Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.874343 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.890094 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gg9mc"] Dec 12 04:55:36 crc kubenswrapper[4796]: I1212 04:55:36.977166 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.018669 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.018834 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.019689 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.094772 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.094833 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.096440 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.423544 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" path="/var/lib/kubelet/pods/1ae63b03-d161-4745-9912-afab23ec6f09/volumes" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.424228 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3857d6ad-7515-4600-8e29-a5e3182f5253" path="/var/lib/kubelet/pods/3857d6ad-7515-4600-8e29-a5e3182f5253/volumes" Dec 12 04:55:37 crc kubenswrapper[4796]: I1212 04:55:37.835845 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"416b9e99-eb64-4c24-9c32-0fb5bc210a2a","Type":"ContainerStarted","Data":"80f06c4b956282aa973877aee48ca58cdbc129f932a70603e9c97ac50b50c10a"} Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.837829 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.838574 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="sg-core" containerID="cri-o://3715d5366680c08a1a86b4f804ea4cd6e55e045fcd9d445a031dd4a27a9b277d" gracePeriod=30 Dec 12 04:55:38 crc kubenswrapper[4796]: 
I1212 04:55:38.838617 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-notification-agent" containerID="cri-o://8e796d734b40e4a2c357b550d387f6681692face2bf7bf3def83a38ca770c746" gracePeriod=30 Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.838574 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="proxy-httpd" containerID="cri-o://ae8a93c58a70b355f2b6c7d19d5c1270fdf2e094b9c337b7f619c585eb84d80a" gracePeriod=30 Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.838451 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-central-agent" containerID="cri-o://526cf792cac3c81fc5a4091ca508881c8e7ae10518434e453b9fbdfc72e4311c" gracePeriod=30 Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.858772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"416b9e99-eb64-4c24-9c32-0fb5bc210a2a","Type":"ContainerStarted","Data":"715e6fb3319459e88c071893815daaafa572d26ba302842804aa4f76e9ec3d71"} Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.859545 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 12 04:55:38 crc kubenswrapper[4796]: I1212 04:55:38.888584 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.496428093 podStartE2EDuration="3.888567056s" podCreationTimestamp="2025-12-12 04:55:35 +0000 UTC" firstStartedPulling="2025-12-12 04:55:37.034941108 +0000 UTC m=+1327.910958255" lastFinishedPulling="2025-12-12 04:55:37.427080081 +0000 UTC m=+1328.303097218" observedRunningTime="2025-12-12 04:55:38.882798726 +0000 UTC m=+1329.758815873" watchObservedRunningTime="2025-12-12 04:55:38.888567056 +0000 UTC m=+1329.764584203" Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.925214 4796 generic.go:334] "Generic (PLEG): container finished" podID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerID="ae8a93c58a70b355f2b6c7d19d5c1270fdf2e094b9c337b7f619c585eb84d80a" exitCode=0 Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.925549 4796 generic.go:334] "Generic (PLEG): container finished" podID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerID="3715d5366680c08a1a86b4f804ea4cd6e55e045fcd9d445a031dd4a27a9b277d" exitCode=2 Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.925588 4796 generic.go:334] "Generic (PLEG): container finished" podID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerID="8e796d734b40e4a2c357b550d387f6681692face2bf7bf3def83a38ca770c746" exitCode=0 Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.925597 4796 generic.go:334] "Generic (PLEG): container finished" podID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerID="526cf792cac3c81fc5a4091ca508881c8e7ae10518434e453b9fbdfc72e4311c" exitCode=0 Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.925329 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerDied","Data":"ae8a93c58a70b355f2b6c7d19d5c1270fdf2e094b9c337b7f619c585eb84d80a"} Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.926455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerDied","Data":"3715d5366680c08a1a86b4f804ea4cd6e55e045fcd9d445a031dd4a27a9b277d"} Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.926483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerDied","Data":"8e796d734b40e4a2c357b550d387f6681692face2bf7bf3def83a38ca770c746"} Dec 12 04:55:39 crc kubenswrapper[4796]: I1212 04:55:39.926492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerDied","Data":"526cf792cac3c81fc5a4091ca508881c8e7ae10518434e453b9fbdfc72e4311c"} Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.156213 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282165 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282203 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282249 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlvz7\" (UniqueName: \"kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282396 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.282430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data\") pod \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\" (UID: \"acf5cec4-54d0-4c21-8b0a-e03ac14d2218\") " Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.283429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.283680 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.289666 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7" (OuterVolumeSpecName: "kube-api-access-dlvz7") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "kube-api-access-dlvz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.292392 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts" (OuterVolumeSpecName: "scripts") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.384376 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.384398 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.384409 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.384418 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlvz7\" (UniqueName: \"kubernetes.io/projected/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-kube-api-access-dlvz7\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.384858 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.410127 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data" (OuterVolumeSpecName: "config-data") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.427800 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acf5cec4-54d0-4c21-8b0a-e03ac14d2218" (UID: "acf5cec4-54d0-4c21-8b0a-e03ac14d2218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.487022 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.487057 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.487070 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5cec4-54d0-4c21-8b0a-e03ac14d2218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.936624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acf5cec4-54d0-4c21-8b0a-e03ac14d2218","Type":"ContainerDied","Data":"e5823174e3c66470b89569349297124e2fc23816e10ca248129fccaa40c284ec"} Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.936942 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.936959 4796 scope.go:117] "RemoveContainer" containerID="ae8a93c58a70b355f2b6c7d19d5c1270fdf2e094b9c337b7f619c585eb84d80a" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.939212 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3077f41-167d-414c-9af4-05a03c32ab03" containerID="17fbdfc55ec1b1878b7115898e6c9c04f2fddd2b6e52a7c1fe53c85102fc1cf1" exitCode=0 Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.939248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" event={"ID":"b3077f41-167d-414c-9af4-05a03c32ab03","Type":"ContainerDied","Data":"17fbdfc55ec1b1878b7115898e6c9c04f2fddd2b6e52a7c1fe53c85102fc1cf1"} Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.989343 4796 scope.go:117] "RemoveContainer" containerID="3715d5366680c08a1a86b4f804ea4cd6e55e045fcd9d445a031dd4a27a9b277d" Dec 12 04:55:40 crc kubenswrapper[4796]: I1212 04:55:40.994681 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.011342 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021395 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:41 crc kubenswrapper[4796]: E1212 04:55:41.021847 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="init" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021865 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="init" Dec 12 04:55:41 crc 
kubenswrapper[4796]: E1212 04:55:41.021885 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-central-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021892 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-central-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: E1212 04:55:41.021912 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="proxy-httpd" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021918 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="proxy-httpd" Dec 12 04:55:41 crc kubenswrapper[4796]: E1212 04:55:41.021927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="dnsmasq-dns" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021933 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="dnsmasq-dns" Dec 12 04:55:41 crc kubenswrapper[4796]: E1212 04:55:41.021943 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="sg-core" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021949 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="sg-core" Dec 12 04:55:41 crc kubenswrapper[4796]: E1212 04:55:41.021961 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-notification-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.021967 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-notification-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.022850 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-central-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.022874 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae63b03-d161-4745-9912-afab23ec6f09" containerName="dnsmasq-dns" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.022890 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="ceilometer-notification-agent" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.022902 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="proxy-httpd" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.022911 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" containerName="sg-core" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.024550 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.032849 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.033133 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.033417 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.039869 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.066428 4796 scope.go:117] "RemoveContainer" containerID="8e796d734b40e4a2c357b550d387f6681692face2bf7bf3def83a38ca770c746" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101266 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101316 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101346 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4688x\" (UniqueName: \"kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101572 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.101589 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc 
kubenswrapper[4796]: I1212 04:55:41.101618 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.141511 4796 scope.go:117] "RemoveContainer" containerID="526cf792cac3c81fc5a4091ca508881c8e7ae10518434e453b9fbdfc72e4311c" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.203796 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4688x\" (UniqueName: \"kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.203895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.203920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.203991 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204007 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204449 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.204503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.209090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.213836 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.218771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.223201 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.227705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4688x\" (UniqueName: \"kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.246374 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts\") pod \"ceilometer-0\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.424663 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf5cec4-54d0-4c21-8b0a-e03ac14d2218" path="/var/lib/kubelet/pods/acf5cec4-54d0-4c21-8b0a-e03ac14d2218/volumes" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.432670 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.949586 4796 generic.go:334] "Generic (PLEG): container finished" podID="0197fef8-4748-4ef6-a3cd-b038975d8882" containerID="5027cd938388feeef5402e43196b5e72000d5149d7a5ddd740a607f2331d72ca" exitCode=0 Dec 12 04:55:41 crc kubenswrapper[4796]: I1212 04:55:41.949918 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jt82w" event={"ID":"0197fef8-4748-4ef6-a3cd-b038975d8882","Type":"ContainerDied","Data":"5027cd938388feeef5402e43196b5e72000d5149d7a5ddd740a607f2331d72ca"} Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.018537 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:55:42 crc kubenswrapper[4796]: W1212 04:55:42.024190 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87bea30e_9e9f_4f74_8619_b09b6fc7b6d3.slice/crio-0753153a3bd410a16a0e14d1f9d652128b364d407c07b17f45e822b0c5e32bb7 WatchSource:0}: Error finding container 0753153a3bd410a16a0e14d1f9d652128b364d407c07b17f45e822b0c5e32bb7: Status 404 returned error can't find the container with id 0753153a3bd410a16a0e14d1f9d652128b364d407c07b17f45e822b0c5e32bb7 Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.372506 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.425177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle\") pod \"b3077f41-167d-414c-9af4-05a03c32ab03\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.425366 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8498\" (UniqueName: \"kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498\") pod \"b3077f41-167d-414c-9af4-05a03c32ab03\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.425431 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts\") pod \"b3077f41-167d-414c-9af4-05a03c32ab03\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.425511 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data\") pod \"b3077f41-167d-414c-9af4-05a03c32ab03\" (UID: \"b3077f41-167d-414c-9af4-05a03c32ab03\") " Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.433467 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498" (OuterVolumeSpecName: "kube-api-access-h8498") pod "b3077f41-167d-414c-9af4-05a03c32ab03" (UID: "b3077f41-167d-414c-9af4-05a03c32ab03"). InnerVolumeSpecName "kube-api-access-h8498". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.452491 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts" (OuterVolumeSpecName: "scripts") pod "b3077f41-167d-414c-9af4-05a03c32ab03" (UID: "b3077f41-167d-414c-9af4-05a03c32ab03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.454707 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3077f41-167d-414c-9af4-05a03c32ab03" (UID: "b3077f41-167d-414c-9af4-05a03c32ab03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.460374 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data" (OuterVolumeSpecName: "config-data") pod "b3077f41-167d-414c-9af4-05a03c32ab03" (UID: "b3077f41-167d-414c-9af4-05a03c32ab03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.527377 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8498\" (UniqueName: \"kubernetes.io/projected/b3077f41-167d-414c-9af4-05a03c32ab03-kube-api-access-h8498\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.527423 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.527433 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.527442 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3077f41-167d-414c-9af4-05a03c32ab03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.961915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" event={"ID":"b3077f41-167d-414c-9af4-05a03c32ab03","Type":"ContainerDied","Data":"b2995691ef5a0d6e80d80032f3e050da847153eec8fa6ef727ed91418aae50d1"} Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.961964 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2995691ef5a0d6e80d80032f3e050da847153eec8fa6ef727ed91418aae50d1" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.962047 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tg6gj" Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.981954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerStarted","Data":"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432"} Dec 12 04:55:42 crc kubenswrapper[4796]: I1212 04:55:42.982009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerStarted","Data":"0753153a3bd410a16a0e14d1f9d652128b364d407c07b17f45e822b0c5e32bb7"} Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.054338 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 12 04:55:43 crc kubenswrapper[4796]: E1212 04:55:43.054817 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3077f41-167d-414c-9af4-05a03c32ab03" containerName="nova-cell1-conductor-db-sync" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.054834 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3077f41-167d-414c-9af4-05a03c32ab03" containerName="nova-cell1-conductor-db-sync" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.055013 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3077f41-167d-414c-9af4-05a03c32ab03" containerName="nova-cell1-conductor-db-sync" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.055663 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.063184 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.102343 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.140836 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.140907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.140977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxh9b\" (UniqueName: \"kubernetes.io/projected/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-kube-api-access-sxh9b\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.242672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 
04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.242726 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.242787 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxh9b\" (UniqueName: \"kubernetes.io/projected/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-kube-api-access-sxh9b\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.247548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.247596 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.264577 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxh9b\" (UniqueName: \"kubernetes.io/projected/e91e5c81-6ced-4f8f-b7ba-c40f35e989ca-kube-api-access-sxh9b\") pod \"nova-cell1-conductor-0\" (UID: \"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca\") " pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.433600 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.505642 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.665220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data\") pod \"0197fef8-4748-4ef6-a3cd-b038975d8882\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.666515 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts\") pod \"0197fef8-4748-4ef6-a3cd-b038975d8882\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.666629 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle\") pod \"0197fef8-4748-4ef6-a3cd-b038975d8882\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.666827 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbfxn\" (UniqueName: \"kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn\") pod \"0197fef8-4748-4ef6-a3cd-b038975d8882\" (UID: \"0197fef8-4748-4ef6-a3cd-b038975d8882\") " Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.682000 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts" (OuterVolumeSpecName: "scripts") pod "0197fef8-4748-4ef6-a3cd-b038975d8882" (UID: "0197fef8-4748-4ef6-a3cd-b038975d8882"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.688488 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn" (OuterVolumeSpecName: "kube-api-access-xbfxn") pod "0197fef8-4748-4ef6-a3cd-b038975d8882" (UID: "0197fef8-4748-4ef6-a3cd-b038975d8882"). InnerVolumeSpecName "kube-api-access-xbfxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.722904 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0197fef8-4748-4ef6-a3cd-b038975d8882" (UID: "0197fef8-4748-4ef6-a3cd-b038975d8882"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.733525 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data" (OuterVolumeSpecName: "config-data") pod "0197fef8-4748-4ef6-a3cd-b038975d8882" (UID: "0197fef8-4748-4ef6-a3cd-b038975d8882"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.768741 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.768767 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.768778 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbfxn\" (UniqueName: \"kubernetes.io/projected/0197fef8-4748-4ef6-a3cd-b038975d8882-kube-api-access-xbfxn\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.768787 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0197fef8-4748-4ef6-a3cd-b038975d8882-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.992002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jt82w" event={"ID":"0197fef8-4748-4ef6-a3cd-b038975d8882","Type":"ContainerDied","Data":"cfd259be08a8cf54bc7d095c9e0d41e29f6479289c6e53c46bf80a3441661690"} Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.992053 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd259be08a8cf54bc7d095c9e0d41e29f6479289c6e53c46bf80a3441661690" Dec 12 04:55:43 crc kubenswrapper[4796]: I1212 04:55:43.992139 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jt82w" Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.009819 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerStarted","Data":"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d"} Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.062162 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.357965 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.358688 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-api" containerID="cri-o://710edf57133f0e497d7571561466f0f22f831bed422bf6175fe63b2631537d5a" gracePeriod=30 Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.358575 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-log" containerID="cri-o://7755798d73a4cf9578b6086072ee8f43ece4d5b8f8d3e4cda0bfafcb9c779c05" gracePeriod=30 Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.390327 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:44 crc kubenswrapper[4796]: I1212 04:55:44.390532 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerName="nova-scheduler-scheduler" 
containerID="cri-o://7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" gracePeriod=30 Dec 12 04:55:44 crc kubenswrapper[4796]: E1212 04:55:44.890141 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 04:55:44 crc kubenswrapper[4796]: E1212 04:55:44.891905 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 04:55:44 crc kubenswrapper[4796]: E1212 04:55:44.902579 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 12 04:55:44 crc kubenswrapper[4796]: E1212 04:55:44.902633 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerName="nova-scheduler-scheduler" Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.034708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca","Type":"ContainerStarted","Data":"ab90e87d52d69e2b9f7b6cba256a5fe4026f1778b97860377a9d57f73992c0e8"} Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.034757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e91e5c81-6ced-4f8f-b7ba-c40f35e989ca","Type":"ContainerStarted","Data":"33b660c12c101ed1ede4e8e2fbacbe6377c8027a1889f58b73f7ec7cfae98811"} Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.035897 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.037131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerStarted","Data":"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e"} Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.042564 4796 generic.go:334] "Generic (PLEG): container finished" podID="061fe450-1581-4362-968e-59b480875649" containerID="7755798d73a4cf9578b6086072ee8f43ece4d5b8f8d3e4cda0bfafcb9c779c05" exitCode=143 Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.042621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerDied","Data":"7755798d73a4cf9578b6086072ee8f43ece4d5b8f8d3e4cda0bfafcb9c779c05"} Dec 12 04:55:45 crc kubenswrapper[4796]: I1212 04:55:45.084926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.084900904 podStartE2EDuration="2.084900904s" 
podCreationTimestamp="2025-12-12 04:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:45.07418671 +0000 UTC m=+1335.950203857" watchObservedRunningTime="2025-12-12 04:55:45.084900904 +0000 UTC m=+1335.960918051" Dec 12 04:55:46 crc kubenswrapper[4796]: I1212 04:55:46.052328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerStarted","Data":"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704"} Dec 12 04:55:46 crc kubenswrapper[4796]: I1212 04:55:46.052586 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:55:46 crc kubenswrapper[4796]: I1212 04:55:46.378232 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 12 04:55:46 crc kubenswrapper[4796]: I1212 04:55:46.401025 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.812953076 podStartE2EDuration="6.401004572s" podCreationTimestamp="2025-12-12 04:55:40 +0000 UTC" firstStartedPulling="2025-12-12 04:55:42.02807849 +0000 UTC m=+1332.904095637" lastFinishedPulling="2025-12-12 04:55:45.616129986 +0000 UTC m=+1336.492147133" observedRunningTime="2025-12-12 04:55:46.072398251 +0000 UTC m=+1336.948415398" watchObservedRunningTime="2025-12-12 04:55:46.401004572 +0000 UTC m=+1337.277021709" Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.019343 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.099443 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.842580 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.950711 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data\") pod \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.950762 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle\") pod \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.950861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87lw9\" (UniqueName: \"kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9\") pod \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\" (UID: \"4d799207-c66a-4dba-b3f7-1b1ef0a7c339\") " Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.971219 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9" (OuterVolumeSpecName: "kube-api-access-87lw9") pod "4d799207-c66a-4dba-b3f7-1b1ef0a7c339" (UID: "4d799207-c66a-4dba-b3f7-1b1ef0a7c339"). InnerVolumeSpecName "kube-api-access-87lw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:47 crc kubenswrapper[4796]: I1212 04:55:47.995709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d799207-c66a-4dba-b3f7-1b1ef0a7c339" (UID: "4d799207-c66a-4dba-b3f7-1b1ef0a7c339"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.024464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data" (OuterVolumeSpecName: "config-data") pod "4d799207-c66a-4dba-b3f7-1b1ef0a7c339" (UID: "4d799207-c66a-4dba-b3f7-1b1ef0a7c339"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.056651 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87lw9\" (UniqueName: \"kubernetes.io/projected/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-kube-api-access-87lw9\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.056684 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.056701 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d799207-c66a-4dba-b3f7-1b1ef0a7c339-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.102607 4796 generic.go:334] "Generic (PLEG): container finished" podID="061fe450-1581-4362-968e-59b480875649" containerID="710edf57133f0e497d7571561466f0f22f831bed422bf6175fe63b2631537d5a" exitCode=0 Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.102688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerDied","Data":"710edf57133f0e497d7571561466f0f22f831bed422bf6175fe63b2631537d5a"} Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.106619 4796 generic.go:334] "Generic (PLEG): container finished" podID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" exitCode=0 Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.106654 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d799207-c66a-4dba-b3f7-1b1ef0a7c339","Type":"ContainerDied","Data":"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612"} Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.106681 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d799207-c66a-4dba-b3f7-1b1ef0a7c339","Type":"ContainerDied","Data":"361423ff39b7e92ee5ecda119478e9d1fe90791f55bd3b1776509713a4e1b09f"} Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.106711 4796 scope.go:117] "RemoveContainer" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.106869 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.212908 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.230068 4796 scope.go:117] "RemoveContainer" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.231782 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:48 crc kubenswrapper[4796]: E1212 04:55:48.232175 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612\": container with ID starting with 7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612 not found: ID does not exist" containerID="7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.232213 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612"} err="failed to get container status \"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612\": rpc error: code = NotFound desc = could not find container \"7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612\": container with ID starting with 7f25b3f431b28499b92b7250c077560dfca32d8d9defa2f74e61e96343f80612 not found: ID does not exist" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.240316 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:48 crc kubenswrapper[4796]: E1212 04:55:48.240886 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerName="nova-scheduler-scheduler" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.240951 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerName="nova-scheduler-scheduler" Dec 12 04:55:48 crc kubenswrapper[4796]: E1212 04:55:48.241047 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0197fef8-4748-4ef6-a3cd-b038975d8882" containerName="nova-manage" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.241098 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0197fef8-4748-4ef6-a3cd-b038975d8882" containerName="nova-manage" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.241356 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" containerName="nova-scheduler-scheduler" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.241473 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0197fef8-4748-4ef6-a3cd-b038975d8882" containerName="nova-manage" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.242534 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.249237 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.258191 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.262933 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.371562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs\") pod \"061fe450-1581-4362-968e-59b480875649\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.371939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr8bl\" (UniqueName: \"kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl\") pod \"061fe450-1581-4362-968e-59b480875649\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.372725 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs" (OuterVolumeSpecName: "logs") pod "061fe450-1581-4362-968e-59b480875649" (UID: "061fe450-1581-4362-968e-59b480875649"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.372782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle\") pod \"061fe450-1581-4362-968e-59b480875649\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.372817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data\") pod \"061fe450-1581-4362-968e-59b480875649\" (UID: \"061fe450-1581-4362-968e-59b480875649\") " Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.373154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.373243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.373319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltvb\" (UniqueName: \"kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.374293 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061fe450-1581-4362-968e-59b480875649-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.386401 4796 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl" (OuterVolumeSpecName: "kube-api-access-vr8bl") pod "061fe450-1581-4362-968e-59b480875649" (UID: "061fe450-1581-4362-968e-59b480875649"). InnerVolumeSpecName "kube-api-access-vr8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.439421 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "061fe450-1581-4362-968e-59b480875649" (UID: "061fe450-1581-4362-968e-59b480875649"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.460219 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data" (OuterVolumeSpecName: "config-data") pod "061fe450-1581-4362-968e-59b480875649" (UID: "061fe450-1581-4362-968e-59b480875649"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.476826 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.477166 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.477220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltvb\" (UniqueName: \"kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.477299 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr8bl\" (UniqueName: \"kubernetes.io/projected/061fe450-1581-4362-968e-59b480875649-kube-api-access-vr8bl\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.477310 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.477319 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061fe450-1581-4362-968e-59b480875649-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.484168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 
04:55:48.488138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.502653 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltvb\" (UniqueName: \"kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb\") pod \"nova-scheduler-0\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " pod="openstack/nova-scheduler-0" Dec 12 04:55:48 crc kubenswrapper[4796]: I1212 04:55:48.581416 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.090339 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.120144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"061fe450-1581-4362-968e-59b480875649","Type":"ContainerDied","Data":"c094763910e23ab8d3cc0b14026c1adac74bc35a895c2492f8149bde67734b49"} Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.120152 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.120208 4796 scope.go:117] "RemoveContainer" containerID="710edf57133f0e497d7571561466f0f22f831bed422bf6175fe63b2631537d5a" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.122375 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7","Type":"ContainerStarted","Data":"2ac00d1ddd93828008cae6ef5e7197bd7cca791b85127c0b97ddd3eb597a5164"} Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.148458 4796 scope.go:117] "RemoveContainer" containerID="7755798d73a4cf9578b6086072ee8f43ece4d5b8f8d3e4cda0bfafcb9c779c05" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.175253 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.186863 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.200936 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:49 crc kubenswrapper[4796]: E1212 04:55:49.201356 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-log" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.201372 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-log" Dec 12 04:55:49 crc kubenswrapper[4796]: E1212 04:55:49.201404 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-api" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.201409 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-api" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.201575 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-api" Dec 12 
04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.201593 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="061fe450-1581-4362-968e-59b480875649" containerName="nova-api-log" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.202723 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.207633 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.237784 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.295835 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.296203 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.296265 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.296313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89p99\" (UniqueName: \"kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.398197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.398318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.398343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.398371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89p99\" (UniqueName: \"kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 
12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.399072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.405086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.411496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.424426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89p99\" (UniqueName: \"kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99\") pod \"nova-api-0\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.424818 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061fe450-1581-4362-968e-59b480875649" path="/var/lib/kubelet/pods/061fe450-1581-4362-968e-59b480875649/volumes" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.425555 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d799207-c66a-4dba-b3f7-1b1ef0a7c339" path="/var/lib/kubelet/pods/4d799207-c66a-4dba-b3f7-1b1ef0a7c339/volumes" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.522874 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:55:49 crc kubenswrapper[4796]: I1212 04:55:49.954847 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:55:50 crc kubenswrapper[4796]: I1212 04:55:50.138809 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerStarted","Data":"d05f7097069d3e2b0b24ceabd4e65efb7ecb4a84df71ea0588395ed59910e8ef"} Dec 12 04:55:50 crc kubenswrapper[4796]: I1212 04:55:50.144062 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7","Type":"ContainerStarted","Data":"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993"} Dec 12 04:55:50 crc kubenswrapper[4796]: I1212 04:55:50.181115 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.181093821 podStartE2EDuration="2.181093821s" podCreationTimestamp="2025-12-12 04:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:50.164007808 +0000 UTC m=+1341.040024965" watchObservedRunningTime="2025-12-12 04:55:50.181093821 +0000 UTC m=+1341.057110968" Dec 12 04:55:51 crc kubenswrapper[4796]: I1212 04:55:51.155090 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerStarted","Data":"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63"} Dec 12 04:55:51 crc kubenswrapper[4796]: I1212 04:55:51.155479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerStarted","Data":"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3"} Dec 12 04:55:53 crc kubenswrapper[4796]: I1212 04:55:53.461776 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 12 04:55:53 crc kubenswrapper[4796]: I1212 04:55:53.503933 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.503910083 podStartE2EDuration="4.503910083s" podCreationTimestamp="2025-12-12 04:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:55:51.17380451 +0000 UTC m=+1342.049821647" watchObservedRunningTime="2025-12-12 04:55:53.503910083 +0000 UTC m=+1344.379927230" Dec 12 04:55:53 crc kubenswrapper[4796]: I1212 04:55:53.582773 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.018481 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.018902 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.019902 4796 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="horizon" containerStatusID={"Type":"cri-o","ID":"40d4f2befd8046441735f846551f6cece578cad2c1c729b3a48374abe66a2e92"} pod="openstack/horizon-6cb55bccb4-z8p6q" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.020214 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" containerID="cri-o://40d4f2befd8046441735f846551f6cece578cad2c1c729b3a48374abe66a2e92" gracePeriod=30 Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.112587 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.112664 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.113449 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c"} pod="openstack/horizon-67764d6b9b-h7fdk" containerMessage="Container horizon failed startup probe, will be restarted" Dec 12 04:55:57 crc kubenswrapper[4796]: I1212 04:55:57.113495 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" containerID="cri-o://27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c" gracePeriod=30 Dec 12 04:55:58 crc kubenswrapper[4796]: I1212 04:55:58.582481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 04:55:58 crc kubenswrapper[4796]: I1212 04:55:58.620345 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 04:55:59 crc kubenswrapper[4796]: I1212 04:55:59.269116 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 12 04:55:59 crc kubenswrapper[4796]: I1212 04:55:59.523449 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:55:59 crc kubenswrapper[4796]: I1212 04:55:59.523785 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:56:00 crc kubenswrapper[4796]: I1212 04:56:00.605550 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:00 crc kubenswrapper[4796]: I1212 04:56:00.605571 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.233888 4796 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.243351 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.288095 4796 generic.go:334] "Generic (PLEG): container finished" podID="08f981fe-4957-44c0-86f6-53f08c41b746" containerID="19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c" exitCode=137 Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.288160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08f981fe-4957-44c0-86f6-53f08c41b746","Type":"ContainerDied","Data":"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c"} Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.288243 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08f981fe-4957-44c0-86f6-53f08c41b746","Type":"ContainerDied","Data":"226b3ca1f61cd7721849c8cc6d16648c9c15936c0a3a8257d1c8ed47ac516d89"} Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.288267 4796 scope.go:117] "RemoveContainer" containerID="19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.288189 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.291435 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerID="be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c" exitCode=137 Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.291507 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.291504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerDied","Data":"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c"} Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.291640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc6a13f9-584c-4f07-b070-c79e4f585c4f","Type":"ContainerDied","Data":"0b58dda2bff100e25b603791777442cc1618a1131e6e2f424b632cda143a9fc2"} Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.312193 4796 scope.go:117] "RemoveContainer" containerID="19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c" Dec 12 04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.312903 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c\": container with ID starting with 19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c not found: ID does not exist" containerID="19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.312944 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c"} err="failed to get container status \"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c\": rpc error: code = NotFound desc = could not find container \"19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c\": container with ID starting with 19bfc41379308e3443f897769c3c28c07541e293c0a2c360306faef8376dac6c not found: ID does not exist" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.312977 4796 scope.go:117] "RemoveContainer" containerID="be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.334128 4796 scope.go:117] "RemoveContainer" containerID="4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.353659 4796 scope.go:117] "RemoveContainer" containerID="be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c" Dec 12 04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.355699 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c\": container with ID starting with be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c not found: ID does not exist" containerID="be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.355754 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c"} err="failed to get container status \"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c\": rpc error: code = NotFound desc = could not find container \"be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c\": container with ID starting with be5da2ef23a15dded8bec3ed82fdd57749d3cd398ec6f8534ffaa9370a709a8c not found: ID does not exist" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.355787 4796 
scope.go:117] "RemoveContainer" containerID="4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644" Dec 12 04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.356156 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644\": container with ID starting with 4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644 not found: ID does not exist" containerID="4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.356204 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644"} err="failed to get container status \"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644\": rpc error: code = NotFound desc = could not find container \"4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644\": container with ID starting with 4d627f2f536fc23225ab138f8879dc420491cc177452875314356185a16bb644 not found: ID does not exist" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380165 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs\") pod \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380374 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data\") pod \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380402 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb2vc\" (UniqueName: \"kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc\") pod \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle\") pod \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\" (UID: \"bc6a13f9-584c-4f07-b070-c79e4f585c4f\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380467 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle\") pod \"08f981fe-4957-44c0-86f6-53f08c41b746\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data\") pod \"08f981fe-4957-44c0-86f6-53f08c41b746\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.380654 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7f5j\" (UniqueName: \"kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j\") 
pod \"08f981fe-4957-44c0-86f6-53f08c41b746\" (UID: \"08f981fe-4957-44c0-86f6-53f08c41b746\") " Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.381432 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs" (OuterVolumeSpecName: "logs") pod "bc6a13f9-584c-4f07-b070-c79e4f585c4f" (UID: "bc6a13f9-584c-4f07-b070-c79e4f585c4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.387498 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc" (OuterVolumeSpecName: "kube-api-access-cb2vc") pod "bc6a13f9-584c-4f07-b070-c79e4f585c4f" (UID: "bc6a13f9-584c-4f07-b070-c79e4f585c4f"). InnerVolumeSpecName "kube-api-access-cb2vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.387954 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j" (OuterVolumeSpecName: "kube-api-access-m7f5j") pod "08f981fe-4957-44c0-86f6-53f08c41b746" (UID: "08f981fe-4957-44c0-86f6-53f08c41b746"). InnerVolumeSpecName "kube-api-access-m7f5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.411562 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08f981fe-4957-44c0-86f6-53f08c41b746" (UID: "08f981fe-4957-44c0-86f6-53f08c41b746"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.414019 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data" (OuterVolumeSpecName: "config-data") pod "bc6a13f9-584c-4f07-b070-c79e4f585c4f" (UID: "bc6a13f9-584c-4f07-b070-c79e4f585c4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.414549 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc6a13f9-584c-4f07-b070-c79e4f585c4f" (UID: "bc6a13f9-584c-4f07-b070-c79e4f585c4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.415634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data" (OuterVolumeSpecName: "config-data") pod "08f981fe-4957-44c0-86f6-53f08c41b746" (UID: "08f981fe-4957-44c0-86f6-53f08c41b746"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482487 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482685 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb2vc\" (UniqueName: \"kubernetes.io/projected/bc6a13f9-584c-4f07-b070-c79e4f585c4f-kube-api-access-cb2vc\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482748 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6a13f9-584c-4f07-b070-c79e4f585c4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482856 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482937 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f981fe-4957-44c0-86f6-53f08c41b746-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.482994 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7f5j\" (UniqueName: \"kubernetes.io/projected/08f981fe-4957-44c0-86f6-53f08c41b746-kube-api-access-m7f5j\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.483052 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6a13f9-584c-4f07-b070-c79e4f585c4f-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.616681 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.627641 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.643907 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.658813 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.669471 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.682627 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f981fe-4957-44c0-86f6-53f08c41b746" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.682673 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f981fe-4957-44c0-86f6-53f08c41b746" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.682720 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-metadata" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.682730 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-metadata" Dec 12 
04:56:03 crc kubenswrapper[4796]: E1212 04:56:03.682741 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-log" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.682751 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-log" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.684693 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f981fe-4957-44c0-86f6-53f08c41b746" containerName="nova-cell1-novncproxy-novncproxy" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.684735 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-metadata" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.684753 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" containerName="nova-metadata-log" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.710325 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.714126 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.714235 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.714366 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.749607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.758845 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.761170 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.762993 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.763737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.768744 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.893856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8rs\" (UniqueName: \"kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894252 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894540 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894637 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.894909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2gt\" (UniqueName: \"kubernetes.io/projected/cb994df9-2eed-4089-9770-ccb138bf3c80-kube-api-access-qr2gt\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.895035 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.996643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.997086 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.997496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.997821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.998057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.998423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2gt\" (UniqueName: \"kubernetes.io/projected/cb994df9-2eed-4089-9770-ccb138bf3c80-kube-api-access-qr2gt\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.998878 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.999250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf8rs\" (UniqueName: \"kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.999527 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:03 crc kubenswrapper[4796]: I1212 04:56:03.999774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.007244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.007856 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.012954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.019913 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.019943 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.020561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.021849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf8rs\" (UniqueName: 
\"kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs\") pod \"nova-metadata-0\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.022500 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.023312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb994df9-2eed-4089-9770-ccb138bf3c80-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.035774 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2gt\" (UniqueName: \"kubernetes.io/projected/cb994df9-2eed-4089-9770-ccb138bf3c80-kube-api-access-qr2gt\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb994df9-2eed-4089-9770-ccb138bf3c80\") " pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.051036 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.134072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:04 crc kubenswrapper[4796]: W1212 04:56:04.532100 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb994df9_2eed_4089_9770_ccb138bf3c80.slice/crio-c5c7e151e7d8bd24074993ce46d8e2d7c369c78ad2c8aaa50f8d6856a65783cf WatchSource:0}: Error finding container c5c7e151e7d8bd24074993ce46d8e2d7c369c78ad2c8aaa50f8d6856a65783cf: Status 404 returned error can't find the container with id c5c7e151e7d8bd24074993ce46d8e2d7c369c78ad2c8aaa50f8d6856a65783cf Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.532115 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 12 04:56:04 crc kubenswrapper[4796]: I1212 04:56:04.628424 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:04 crc kubenswrapper[4796]: W1212 04:56:04.629446 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea26373_27c8_4cf2_999f_a20004ce50c3.slice/crio-d63faa8781aabdae4f24a18c6bb800fd0a352a3c1c5900dbe2d01a9d67c18320 WatchSource:0}: Error finding container d63faa8781aabdae4f24a18c6bb800fd0a352a3c1c5900dbe2d01a9d67c18320: Status 404 returned error can't find the container with id d63faa8781aabdae4f24a18c6bb800fd0a352a3c1c5900dbe2d01a9d67c18320 Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.317701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb994df9-2eed-4089-9770-ccb138bf3c80","Type":"ContainerStarted","Data":"5776367f80bf85a0bfe42f93f82868348bc5348cff5f9a0751ae2e5814bdbacc"} Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.317737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"cb994df9-2eed-4089-9770-ccb138bf3c80","Type":"ContainerStarted","Data":"c5c7e151e7d8bd24074993ce46d8e2d7c369c78ad2c8aaa50f8d6856a65783cf"} Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.320239 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerStarted","Data":"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30"} Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.320269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerStarted","Data":"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601"} Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.320291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerStarted","Data":"d63faa8781aabdae4f24a18c6bb800fd0a352a3c1c5900dbe2d01a9d67c18320"} Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.370689 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.370669851 podStartE2EDuration="2.370669851s" podCreationTimestamp="2025-12-12 04:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:05.336131363 +0000 UTC m=+1356.212148530" watchObservedRunningTime="2025-12-12 04:56:05.370669851 +0000 UTC m=+1356.246686998" Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.384890 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.384868484 podStartE2EDuration="2.384868484s" podCreationTimestamp="2025-12-12 04:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:05.357083247 +0000 UTC m=+1356.233100404" watchObservedRunningTime="2025-12-12 04:56:05.384868484 +0000 UTC m=+1356.260885631" Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.429399 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f981fe-4957-44c0-86f6-53f08c41b746" path="/var/lib/kubelet/pods/08f981fe-4957-44c0-86f6-53f08c41b746/volumes" Dec 12 04:56:05 crc kubenswrapper[4796]: I1212 04:56:05.430217 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6a13f9-584c-4f07-b070-c79e4f585c4f" path="/var/lib/kubelet/pods/bc6a13f9-584c-4f07-b070-c79e4f585c4f/volumes" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.052108 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.134365 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.134420 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.526597 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.527074 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 
04:56:09.527883 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 04:56:09 crc kubenswrapper[4796]: I1212 04:56:09.529467 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.364629 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.369159 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.623638 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.625670 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.656332 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.767661 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.767741 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.767784 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddngd\" (UniqueName: \"kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.767843 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.767980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.768009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 
04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddngd\" (UniqueName: \"kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870265 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870321 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.870339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.871156 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.872153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.872802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.873019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.873072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.892653 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddngd\" (UniqueName: \"kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd\") pod \"dnsmasq-dns-cd5cbd7b9-j4lm4\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:10 crc kubenswrapper[4796]: I1212 04:56:10.971195 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:11 crc kubenswrapper[4796]: I1212 04:56:11.445103 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 12 04:56:11 crc kubenswrapper[4796]: W1212 04:56:11.611435 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod306331bd_a744_4aa2_8736_e253119cd622.slice/crio-1e7c86fbc585898f225fb2640409057ecc48000b519a38ea80d6de33ff4838b9 WatchSource:0}: Error finding container 1e7c86fbc585898f225fb2640409057ecc48000b519a38ea80d6de33ff4838b9: Status 404 returned error can't find the container with id 1e7c86fbc585898f225fb2640409057ecc48000b519a38ea80d6de33ff4838b9 Dec 12 04:56:11 crc kubenswrapper[4796]: I1212 04:56:11.620499 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:56:12 crc kubenswrapper[4796]: I1212 04:56:12.386132 4796 generic.go:334] "Generic (PLEG): container finished" podID="306331bd-a744-4aa2-8736-e253119cd622" containerID="f031963b565fec334496e122754415d2636e3bbd33ca0988c7ce634e3e8e572c" exitCode=0 Dec 12 04:56:12 crc kubenswrapper[4796]: I1212 04:56:12.386255 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" event={"ID":"306331bd-a744-4aa2-8736-e253119cd622","Type":"ContainerDied","Data":"f031963b565fec334496e122754415d2636e3bbd33ca0988c7ce634e3e8e572c"} Dec 12 04:56:12 crc kubenswrapper[4796]: I1212 04:56:12.386456 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" event={"ID":"306331bd-a744-4aa2-8736-e253119cd622","Type":"ContainerStarted","Data":"1e7c86fbc585898f225fb2640409057ecc48000b519a38ea80d6de33ff4838b9"} Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.034530 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.396410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" event={"ID":"306331bd-a744-4aa2-8736-e253119cd622","Type":"ContainerStarted","Data":"f5dd80b8c6604503c4c8d65ed2ea620db02113e3a305668630aa62cc90746740"} Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.396548 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-log" containerID="cri-o://669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63" gracePeriod=30 Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.396599 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-api" containerID="cri-o://a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3" gracePeriod=30 Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.602162 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" podStartSLOduration=3.602142624 podStartE2EDuration="3.602142624s" podCreationTimestamp="2025-12-12 04:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:13.426434072 +0000 UTC m=+1364.302451219" watchObservedRunningTime="2025-12-12 04:56:13.602142624 +0000 UTC m=+1364.478159771" Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.616059 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.616490 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="sg-core" containerID="cri-o://04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e" gracePeriod=30 Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.616638 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="proxy-httpd" containerID="cri-o://abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704" gracePeriod=30 Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.616696 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-notification-agent" containerID="cri-o://e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d" gracePeriod=30 Dec 12 04:56:13 crc kubenswrapper[4796]: I1212 04:56:13.616402 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-central-agent" containerID="cri-o://c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432" gracePeriod=30 Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.052420 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.085231 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.139814 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.139864 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.406589 4796 generic.go:334] "Generic (PLEG): container finished" podID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" 
containerID="abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704" exitCode=0 Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.406886 4796 generic.go:334] "Generic (PLEG): container finished" podID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerID="04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e" exitCode=2 Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.406974 4796 generic.go:334] "Generic (PLEG): container finished" podID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerID="c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432" exitCode=0 Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.406861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerDied","Data":"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704"} Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.407183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerDied","Data":"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e"} Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.407315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerDied","Data":"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432"} Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.410125 4796 generic.go:334] "Generic (PLEG): container finished" podID="a35b619b-50f3-4af0-b515-83931d780694" containerID="669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63" exitCode=143 Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.410231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerDied","Data":"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63"} Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.412195 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.441662 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.709916 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hfnkc"] Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.711452 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.713982 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.714345 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.739527 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hfnkc"] Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.896137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.896189 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.896212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.896244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2gj\" (UniqueName: \"kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.998484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2gj\" (UniqueName: \"kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.998961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.999067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:14 crc kubenswrapper[4796]: I1212 04:56:14.999154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.007134 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.007369 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.009785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.018912 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2gj\" (UniqueName: \"kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj\") pod \"nova-cell1-cell-mapping-hfnkc\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.029540 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.152482 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.152768 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:15 crc kubenswrapper[4796]: I1212 04:56:15.574492 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hfnkc"] Dec 12 04:56:15 crc kubenswrapper[4796]: W1212 04:56:15.584228 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b94b77_fe1e_4a8f_b334_8f232c6c3bf9.slice/crio-cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed WatchSource:0}: Error finding container cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed: Status 404 returned error can't find the container with id cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed Dec 12 04:56:16 crc kubenswrapper[4796]: I1212 04:56:16.492499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hfnkc" event={"ID":"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9","Type":"ContainerStarted","Data":"dbd8c635bfcc8f3006ee8a45f2acf52d3e73a5c4e5c91679b4ee13fa524f7df1"} Dec 12 04:56:16 crc kubenswrapper[4796]: I1212 04:56:16.493056 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hfnkc" event={"ID":"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9","Type":"ContainerStarted","Data":"cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed"} Dec 12 04:56:16 crc kubenswrapper[4796]: I1212 04:56:16.515631 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hfnkc" podStartSLOduration=2.515610036 podStartE2EDuration="2.515610036s" podCreationTimestamp="2025-12-12 04:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:16.511390552 +0000 UTC m=+1367.387407699" watchObservedRunningTime="2025-12-12 04:56:16.515610036 +0000 UTC m=+1367.391627183" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.028260 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.204900 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89p99\" (UniqueName: \"kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99\") pod \"a35b619b-50f3-4af0-b515-83931d780694\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.205008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs\") pod \"a35b619b-50f3-4af0-b515-83931d780694\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.205029 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data\") pod \"a35b619b-50f3-4af0-b515-83931d780694\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.205084 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle\") pod \"a35b619b-50f3-4af0-b515-83931d780694\" (UID: \"a35b619b-50f3-4af0-b515-83931d780694\") " Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.206607 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs" (OuterVolumeSpecName: "logs") pod "a35b619b-50f3-4af0-b515-83931d780694" (UID: "a35b619b-50f3-4af0-b515-83931d780694"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.257431 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35b619b-50f3-4af0-b515-83931d780694" (UID: "a35b619b-50f3-4af0-b515-83931d780694"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.258409 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data" (OuterVolumeSpecName: "config-data") pod "a35b619b-50f3-4af0-b515-83931d780694" (UID: "a35b619b-50f3-4af0-b515-83931d780694"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.259645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99" (OuterVolumeSpecName: "kube-api-access-89p99") pod "a35b619b-50f3-4af0-b515-83931d780694" (UID: "a35b619b-50f3-4af0-b515-83931d780694"). InnerVolumeSpecName "kube-api-access-89p99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.307010 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.307047 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89p99\" (UniqueName: \"kubernetes.io/projected/a35b619b-50f3-4af0-b515-83931d780694-kube-api-access-89p99\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.307058 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35b619b-50f3-4af0-b515-83931d780694-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.307068 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35b619b-50f3-4af0-b515-83931d780694-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.503951 4796 generic.go:334] "Generic (PLEG): container finished" podID="a35b619b-50f3-4af0-b515-83931d780694" containerID="a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3" exitCode=0 Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.504966 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.505588 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerDied","Data":"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3"} Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.505623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35b619b-50f3-4af0-b515-83931d780694","Type":"ContainerDied","Data":"d05f7097069d3e2b0b24ceabd4e65efb7ecb4a84df71ea0588395ed59910e8ef"} Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.505645 4796 scope.go:117] "RemoveContainer" containerID="a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.586534 4796 scope.go:117] "RemoveContainer" containerID="669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.614550 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.637377 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.655529 4796 scope.go:117] "RemoveContainer" containerID="a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3" Dec 12 04:56:17 crc kubenswrapper[4796]: E1212 04:56:17.656151 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3\": container with ID starting with a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3 not found: ID does not exist" containerID="a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.656186 4796 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3"} err="failed to get container status \"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3\": rpc error: code = NotFound desc = could not find container \"a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3\": container with ID starting with a127be247a1e53cdef0915e11c65252c46c24ad923b49544fbd0722cf1e1dff3 not found: ID does not exist" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.656389 4796 scope.go:117] "RemoveContainer" containerID="669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63" Dec 12 04:56:17 crc kubenswrapper[4796]: E1212 04:56:17.661522 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63\": container with ID starting with 669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63 not found: ID does not exist" containerID="669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.661574 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63"} err="failed to get container status \"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63\": rpc error: code = NotFound desc = could not find container \"669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63\": container with ID starting with 669736815b4df09c43fa24c638a4a5b6c8c39d2fcc371f82008e4d8404728a63 not found: ID does not exist" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.661628 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:17 crc kubenswrapper[4796]: E1212 04:56:17.662147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-log" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.662163 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-log" Dec 12 04:56:17 crc kubenswrapper[4796]: E1212 04:56:17.662193 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-api" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.662202 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-api" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.662490 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-log" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.662512 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35b619b-50f3-4af0-b515-83931d780694" containerName="nova-api-api" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.663845 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.672020 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.672578 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.672786 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.686263 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcr2\" (UniqueName: \"kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.817496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgcr2\" (UniqueName: \"kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918642 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918715 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918744 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918765 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.918835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.921859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.932925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.933444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.952172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.961823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 12 04:56:17 crc kubenswrapper[4796]: I1212 04:56:17.962522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgcr2\" (UniqueName: \"kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2\") pod \"nova-api-0\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " pod="openstack/nova-api-0" Dec 
12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.007723 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.133069 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.238871 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239168 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239339 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4688x\" (UniqueName: \"kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239436 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239495 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.239545 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs\") pod \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\" (UID: \"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3\") " Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.240702 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.240882 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.243513 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x" (OuterVolumeSpecName: "kube-api-access-4688x") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "kube-api-access-4688x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.256813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts" (OuterVolumeSpecName: "scripts") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.342045 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.342071 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4688x\" (UniqueName: \"kubernetes.io/projected/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-kube-api-access-4688x\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.342081 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.342091 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.355238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.447866 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.463023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.467533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data" (OuterVolumeSpecName: "config-data") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.493093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" (UID: "87bea30e-9e9f-4f74-8619-b09b6fc7b6d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.525097 4796 generic.go:334] "Generic (PLEG): container finished" podID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerID="e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d" exitCode=0 Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.525162 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerDied","Data":"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d"} Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.525191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87bea30e-9e9f-4f74-8619-b09b6fc7b6d3","Type":"ContainerDied","Data":"0753153a3bd410a16a0e14d1f9d652128b364d407c07b17f45e822b0c5e32bb7"} Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.525207 4796 scope.go:117] "RemoveContainer" containerID="abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.525378 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.558157 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.558475 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.558487 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.563732 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.608448 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.611673 4796 scope.go:117] "RemoveContainer" containerID="04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e" Dec 12 04:56:18 crc kubenswrapper[4796]: W1212 04:56:18.618222 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4c556d_6661_4710_bd9e_b8197e9ea22a.slice/crio-d0613b1a829273e39c2cff0d272d2d260f75699bd2044c5bab9743d361c1d6c8 WatchSource:0}: Error finding container d0613b1a829273e39c2cff0d272d2d260f75699bd2044c5bab9743d361c1d6c8: Status 404 returned error can't find the container with id d0613b1a829273e39c2cff0d272d2d260f75699bd2044c5bab9743d361c1d6c8 Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.622339 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.632397 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.632839 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="sg-core" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.632853 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="sg-core" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.632866 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-notification-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.632872 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-notification-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.632910 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="proxy-httpd" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.632917 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="proxy-httpd" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.632926 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" 
containerName="ceilometer-central-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.632932 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-central-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.633115 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-central-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.633130 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="ceilometer-notification-agent" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.633142 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="sg-core" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.633159 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" containerName="proxy-httpd" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.635038 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.637730 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.637914 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.638055 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.640203 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.687255 4796 scope.go:117] "RemoveContainer" containerID="e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.725058 4796 scope.go:117] "RemoveContainer" containerID="c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.761221 4796 scope.go:117] "RemoveContainer" containerID="abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.761806 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704\": container with ID starting with abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704 not found: ID does not exist" containerID="abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.761862 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704"} err="failed to get container status \"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704\": rpc error: code = NotFound desc = could not find container \"abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704\": container with ID starting with abe43049ae676990d53d0d271544d5595b97b7e3c6ef9e159f3b3c17b6bae704 not found: ID does not exist" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.761895 4796 
scope.go:117] "RemoveContainer" containerID="04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.762399 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e\": container with ID starting with 04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e not found: ID does not exist" containerID="04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.762427 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e"} err="failed to get container status \"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e\": rpc error: code = NotFound desc = could not find container \"04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e\": container with ID starting with 04bb9334d836f183ac9de959f948597dfdd017feb6587759e0d8952d21be485e not found: ID does not exist" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.762450 4796 scope.go:117] "RemoveContainer" containerID="e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.762788 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d\": container with ID starting with e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d not found: ID does not exist" containerID="e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.762814 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d"} err="failed to get container status \"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d\": rpc error: code = NotFound desc = could not find container \"e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d\": container with ID starting with e625481640297f3f592bb87a14a508b6f4506f0ea6bc6168548e4eb6827dc97d not found: ID does not exist" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.762832 4796 scope.go:117] "RemoveContainer" containerID="c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432" Dec 12 04:56:18 crc kubenswrapper[4796]: E1212 04:56:18.763104 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432\": container with ID starting with c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432 not found: ID does not exist" containerID="c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.763131 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432"} err="failed to get container status \"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432\": rpc error: code = NotFound desc = could not find container \"c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432\": container with ID starting with 
c38c39f6685c1d0b1a4044fafe3937494cfe326fa5ed13d532e201541daf3432 not found: ID does not exist" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-config-data\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769414 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-scripts\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.769834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgcw8\" (UniqueName: \"kubernetes.io/projected/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-kube-api-access-mgcw8\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.871904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.871998 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872030 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgcw8\" (UniqueName: \"kubernetes.io/projected/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-kube-api-access-mgcw8\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872070 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872228 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-config-data\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-scripts\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.872304 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.873381 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.873788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.875743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.876852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.878810 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-config-data\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.883476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.888771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-scripts\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.891744 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgcw8\" (UniqueName: \"kubernetes.io/projected/b9f1d6ee-b301-4827-9a5b-8a98d43319bc-kube-api-access-mgcw8\") pod \"ceilometer-0\" (UID: \"b9f1d6ee-b301-4827-9a5b-8a98d43319bc\") " pod="openstack/ceilometer-0" Dec 12 04:56:18 crc kubenswrapper[4796]: I1212 04:56:18.966070 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.425341 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bea30e-9e9f-4f74-8619-b09b6fc7b6d3" path="/var/lib/kubelet/pods/87bea30e-9e9f-4f74-8619-b09b6fc7b6d3/volumes" Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.426538 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35b619b-50f3-4af0-b515-83931d780694" path="/var/lib/kubelet/pods/a35b619b-50f3-4af0-b515-83931d780694/volumes" Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.579889 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.585940 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerStarted","Data":"2e4a64f2b5521093139463c2e03492abcab5d1a34a2db3848e54cf1611eeee1c"} Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.585996 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerStarted","Data":"210bb40c4a78a6ab40419bb76ba4ac01ac26a3b643dd117c814e4233f30d1cda"} Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.586012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerStarted","Data":"d0613b1a829273e39c2cff0d272d2d260f75699bd2044c5bab9743d361c1d6c8"} Dec 12 04:56:19 crc kubenswrapper[4796]: I1212 04:56:19.622943 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.622922383 podStartE2EDuration="2.622922383s" 
podCreationTimestamp="2025-12-12 04:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:19.61177289 +0000 UTC m=+1370.487790057" watchObservedRunningTime="2025-12-12 04:56:19.622922383 +0000 UTC m=+1370.498939530" Dec 12 04:56:20 crc kubenswrapper[4796]: I1212 04:56:20.608793 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9f1d6ee-b301-4827-9a5b-8a98d43319bc","Type":"ContainerStarted","Data":"a2bf14c9fcb4fc25ee27abcecd74a3482502ce5c3537231df81de9460809cbf3"} Dec 12 04:56:20 crc kubenswrapper[4796]: I1212 04:56:20.609112 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9f1d6ee-b301-4827-9a5b-8a98d43319bc","Type":"ContainerStarted","Data":"0f0faa5599346801f53aa19b4e94c2b96dc656802619d3c868584c7909a873ae"} Dec 12 04:56:20 crc kubenswrapper[4796]: I1212 04:56:20.978835 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.102401 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.102652 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="dnsmasq-dns" containerID="cri-o://57c51918b416e6698eb61899272ca25b3c23af5039836958e4e1276f308965e2" gracePeriod=10 Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.632435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9f1d6ee-b301-4827-9a5b-8a98d43319bc","Type":"ContainerStarted","Data":"63d8d8a0bdba4d75f57e89c8449f4f16d33e87fcf7edaccef3b793c415bb5925"} Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.638853 4796 generic.go:334] "Generic (PLEG): container finished" podID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerID="57c51918b416e6698eb61899272ca25b3c23af5039836958e4e1276f308965e2" exitCode=0 Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.638893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" event={"ID":"9580c7ba-bc82-4bbb-b14b-d5d527390627","Type":"ContainerDied","Data":"57c51918b416e6698eb61899272ca25b3c23af5039836958e4e1276f308965e2"} Dec 12 04:56:21 crc kubenswrapper[4796]: I1212 04:56:21.912715 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076061 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h569z\" (UniqueName: \"kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076253 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076415 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.076540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config\") pod \"9580c7ba-bc82-4bbb-b14b-d5d527390627\" (UID: \"9580c7ba-bc82-4bbb-b14b-d5d527390627\") " Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.122527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z" (OuterVolumeSpecName: "kube-api-access-h569z") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "kube-api-access-h569z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.179471 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h569z\" (UniqueName: \"kubernetes.io/projected/9580c7ba-bc82-4bbb-b14b-d5d527390627-kube-api-access-h569z\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.220016 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.247999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.265863 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.269692 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.279164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config" (OuterVolumeSpecName: "config") pod "9580c7ba-bc82-4bbb-b14b-d5d527390627" (UID: "9580c7ba-bc82-4bbb-b14b-d5d527390627"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.281418 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.281458 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.281471 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.281479 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.281488 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9580c7ba-bc82-4bbb-b14b-d5d527390627-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.647309 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" event={"ID":"9580c7ba-bc82-4bbb-b14b-d5d527390627","Type":"ContainerDied","Data":"7495c93fb1c45ea994952ed89dc70c80f915baba89980585f578cf0c50edd91b"} Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.647603 4796 scope.go:117] "RemoveContainer" 
containerID="57c51918b416e6698eb61899272ca25b3c23af5039836958e4e1276f308965e2" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.647317 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-gmxgh" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.650325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9f1d6ee-b301-4827-9a5b-8a98d43319bc","Type":"ContainerStarted","Data":"376f9e4ba3a699455971a096279c7d16a452e15ee23ab83f622d06ee9c7b9dc7"} Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.686759 4796 scope.go:117] "RemoveContainer" containerID="c8fd3c37e0ab1e706a4aca597d1522d560a3ef7864745f454f08a894cf06089e" Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.736625 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:56:22 crc kubenswrapper[4796]: I1212 04:56:22.750187 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-gmxgh"] Dec 12 04:56:23 crc kubenswrapper[4796]: I1212 04:56:23.420920 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" path="/var/lib/kubelet/pods/9580c7ba-bc82-4bbb-b14b-d5d527390627/volumes" Dec 12 04:56:23 crc kubenswrapper[4796]: I1212 04:56:23.674904 4796 generic.go:334] "Generic (PLEG): container finished" podID="18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" containerID="dbd8c635bfcc8f3006ee8a45f2acf52d3e73a5c4e5c91679b4ee13fa524f7df1" exitCode=0 Dec 12 04:56:23 crc kubenswrapper[4796]: I1212 04:56:23.675470 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hfnkc" event={"ID":"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9","Type":"ContainerDied","Data":"dbd8c635bfcc8f3006ee8a45f2acf52d3e73a5c4e5c91679b4ee13fa524f7df1"} Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.143496 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.145249 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.157361 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.705552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9f1d6ee-b301-4827-9a5b-8a98d43319bc","Type":"ContainerStarted","Data":"a53af0567e2ad535b42d98934d1709505060f899f5b64a4975f5f905c525ba63"} Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.705689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.720200 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 04:56:24 crc kubenswrapper[4796]: I1212 04:56:24.757159 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.500728327 podStartE2EDuration="6.75714376s" podCreationTimestamp="2025-12-12 04:56:18 +0000 UTC" firstStartedPulling="2025-12-12 04:56:19.583615414 +0000 UTC m=+1370.459632561" lastFinishedPulling="2025-12-12 04:56:23.840030847 +0000 UTC m=+1374.716047994" observedRunningTime="2025-12-12 04:56:24.732665464 +0000 UTC m=+1375.608682611" 
watchObservedRunningTime="2025-12-12 04:56:24.75714376 +0000 UTC m=+1375.633160897" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.119414 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.239424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2gj\" (UniqueName: \"kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj\") pod \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.239698 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts\") pod \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.239879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle\") pod \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.239983 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data\") pod \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\" (UID: \"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9\") " Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.257532 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj" (OuterVolumeSpecName: "kube-api-access-8x2gj") pod "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" (UID: "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9"). InnerVolumeSpecName "kube-api-access-8x2gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.257581 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts" (OuterVolumeSpecName: "scripts") pod "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" (UID: "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.274265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data" (OuterVolumeSpecName: "config-data") pod "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" (UID: "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.302417 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" (UID: "18b94b77-fe1e-4a8f-b334-8f232c6c3bf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.342546 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2gj\" (UniqueName: \"kubernetes.io/projected/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-kube-api-access-8x2gj\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.342594 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.342608 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.342620 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.721258 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hfnkc" event={"ID":"18b94b77-fe1e-4a8f-b334-8f232c6c3bf9","Type":"ContainerDied","Data":"cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed"} Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.721320 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd2181e39bba8717d83e182f1c9d7263be725e4535f1b44a94bef2912ff83ed" Dec 12 04:56:25 crc kubenswrapper[4796]: I1212 04:56:25.721519 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hfnkc" Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.041337 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.041768 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" containerName="nova-scheduler-scheduler" containerID="cri-o://5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993" gracePeriod=30 Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.049716 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.057253 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.057531 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-log" containerID="cri-o://210bb40c4a78a6ab40419bb76ba4ac01ac26a3b643dd117c814e4233f30d1cda" gracePeriod=30 Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.057593 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-api" containerID="cri-o://2e4a64f2b5521093139463c2e03492abcab5d1a34a2db3848e54cf1611eeee1c" gracePeriod=30 Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.763189 4796 generic.go:334] "Generic (PLEG): container finished" podID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" 
containerID="2e4a64f2b5521093139463c2e03492abcab5d1a34a2db3848e54cf1611eeee1c" exitCode=0 Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.763218 4796 generic.go:334] "Generic (PLEG): container finished" podID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerID="210bb40c4a78a6ab40419bb76ba4ac01ac26a3b643dd117c814e4233f30d1cda" exitCode=143 Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.763861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerDied","Data":"2e4a64f2b5521093139463c2e03492abcab5d1a34a2db3848e54cf1611eeee1c"} Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.763923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerDied","Data":"210bb40c4a78a6ab40419bb76ba4ac01ac26a3b643dd117c814e4233f30d1cda"} Dec 12 04:56:26 crc kubenswrapper[4796]: I1212 04:56:26.946232 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086267 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086342 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgcr2\" (UniqueName: \"kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.086472 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs\") pod \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\" (UID: \"8e4c556d-6661-4710-bd9e-b8197e9ea22a\") " Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.087395 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs" (OuterVolumeSpecName: "logs") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: 
"8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.098025 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2" (OuterVolumeSpecName: "kube-api-access-vgcr2") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: "8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "kube-api-access-vgcr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.144955 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: "8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.163772 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data" (OuterVolumeSpecName: "config-data") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: "8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.165378 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: "8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.188579 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.188608 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.188617 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgcr2\" (UniqueName: \"kubernetes.io/projected/8e4c556d-6661-4710-bd9e-b8197e9ea22a-kube-api-access-vgcr2\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.188625 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.188633 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e4c556d-6661-4710-bd9e-b8197e9ea22a-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.203077 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e4c556d-6661-4710-bd9e-b8197e9ea22a" (UID: "8e4c556d-6661-4710-bd9e-b8197e9ea22a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.290460 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e4c556d-6661-4710-bd9e-b8197e9ea22a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.774751 4796 generic.go:334] "Generic (PLEG): container finished" podID="7913672c-384c-472c-89a8-0d546f345a28" containerID="40d4f2befd8046441735f846551f6cece578cad2c1c729b3a48374abe66a2e92" exitCode=137 Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.774845 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerDied","Data":"40d4f2befd8046441735f846551f6cece578cad2c1c729b3a48374abe66a2e92"} Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.775035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb55bccb4-z8p6q" event={"ID":"7913672c-384c-472c-89a8-0d546f345a28","Type":"ContainerStarted","Data":"cbee61e28972a73e4ea32f9e6c432f30aca184527d4f0bc0b97120cfeae86d3f"} Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.775056 4796 scope.go:117] "RemoveContainer" containerID="d9a41c51a02fbfc7b83df3206185605b2f04324f46c56506c9aab25a48af1d31" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.781049 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e4c556d-6661-4710-bd9e-b8197e9ea22a","Type":"ContainerDied","Data":"d0613b1a829273e39c2cff0d272d2d260f75699bd2044c5bab9743d361c1d6c8"} Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.781137 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.790872 4796 generic.go:334] "Generic (PLEG): container finished" podID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerID="27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c" exitCode=137 Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.791059 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" containerID="cri-o://2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601" gracePeriod=30 Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.791130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c"} Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.791156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerStarted","Data":"1403caedf16585011344533a763725c13ff01db71c0904473a957acd25e5a3ff"} Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.791768 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" containerID="cri-o://fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30" gracePeriod=30 Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.853864 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.867735 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.881175 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:27 crc kubenswrapper[4796]: E1212 04:56:27.882221 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-log" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.882321 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-log" Dec 12 04:56:27 crc kubenswrapper[4796]: E1212 04:56:27.882421 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-api" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.882488 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-api" Dec 12 04:56:27 crc kubenswrapper[4796]: E1212 04:56:27.882601 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="init" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.882675 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="init" Dec 12 04:56:27 crc kubenswrapper[4796]: E1212 04:56:27.882759 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="dnsmasq-dns" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.882826 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="dnsmasq-dns" Dec 12 04:56:27 crc kubenswrapper[4796]: E1212 04:56:27.882905 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" containerName="nova-manage" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.882991 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" containerName="nova-manage" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.883366 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" containerName="nova-manage" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.883473 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-api" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.883555 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" containerName="nova-api-log" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.883643 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9580c7ba-bc82-4bbb-b14b-d5d527390627" containerName="dnsmasq-dns" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.885066 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.889301 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.889749 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.889992 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.918139 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:27 crc kubenswrapper[4796]: I1212 04:56:27.995581 4796 scope.go:117] "RemoveContainer" containerID="2e4a64f2b5521093139463c2e03492abcab5d1a34a2db3848e54cf1611eeee1c" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.008985 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.009056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-public-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.009112 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-config-data\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.009138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.009199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-logs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.009222 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwbqf\" (UniqueName: \"kubernetes.io/projected/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-kube-api-access-rwbqf\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.052427 4796 scope.go:117] "RemoveContainer" containerID="210bb40c4a78a6ab40419bb76ba4ac01ac26a3b643dd117c814e4233f30d1cda" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.092091 4796 scope.go:117] "RemoveContainer" containerID="b50dcc280f6394fbab162fdb44d787620aad63c8ea6483a45866f68fc3afb35a" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwbqf\" (UniqueName: \"kubernetes.io/projected/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-kube-api-access-rwbqf\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-public-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-config-data\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-logs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.111657 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-logs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.117639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.118825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.119235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-public-tls-certs\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.122713 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-config-data\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.132921 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwbqf\" (UniqueName: \"kubernetes.io/projected/f70e0dab-4b7c-4e6f-b28e-76e72492ca1d-kube-api-access-rwbqf\") pod \"nova-api-0\" (UID: \"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d\") " pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.264126 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.546916 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.728507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fltvb\" (UniqueName: \"kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb\") pod \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.728585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle\") pod \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.728662 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data\") pod \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\" (UID: \"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7\") " Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.736593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb" (OuterVolumeSpecName: "kube-api-access-fltvb") pod "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" (UID: "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7"). InnerVolumeSpecName "kube-api-access-fltvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.772427 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data" (OuterVolumeSpecName: "config-data") pod "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" (UID: "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.818419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" (UID: "10ca7515-0dcd-49f1-8431-a8ddf3b33ea7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.823010 4796 generic.go:334] "Generic (PLEG): container finished" podID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" containerID="5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993" exitCode=0 Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.823060 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7","Type":"ContainerDied","Data":"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993"} Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.823110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10ca7515-0dcd-49f1-8431-a8ddf3b33ea7","Type":"ContainerDied","Data":"2ac00d1ddd93828008cae6ef5e7197bd7cca791b85127c0b97ddd3eb597a5164"} Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.823144 4796 scope.go:117] "RemoveContainer" containerID="5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.823666 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.830672 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fltvb\" (UniqueName: \"kubernetes.io/projected/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-kube-api-access-fltvb\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.830708 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.830721 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.845477 4796 generic.go:334] "Generic (PLEG): container finished" podID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerID="2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601" exitCode=143 Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.845585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerDied","Data":"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601"} Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.887497 4796 scope.go:117] "RemoveContainer" containerID="5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993" Dec 12 04:56:28 crc kubenswrapper[4796]: E1212 04:56:28.888647 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993\": container with ID starting with 5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993 not found: ID does not exist" containerID="5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.888683 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993"} err="failed to get container status 
\"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993\": rpc error: code = NotFound desc = could not find container \"5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993\": container with ID starting with 5a2d6d0febad42c1b2ab27bd3386e86506e5a1d4feb3ad36cfb270d8bc19f993 not found: ID does not exist" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.891319 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.906001 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.935381 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:28 crc kubenswrapper[4796]: E1212 04:56:28.935947 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" containerName="nova-scheduler-scheduler" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.935969 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" containerName="nova-scheduler-scheduler" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.936202 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" containerName="nova-scheduler-scheduler" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.936982 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.947821 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.957181 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:28 crc kubenswrapper[4796]: I1212 04:56:28.965638 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.038444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-config-data\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.038500 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8457\" (UniqueName: \"kubernetes.io/projected/df191340-1fad-4c88-b12c-a4af0fc96924-kube-api-access-s8457\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.038605 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.146360 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-config-data\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " 
pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.146426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8457\" (UniqueName: \"kubernetes.io/projected/df191340-1fad-4c88-b12c-a4af0fc96924-kube-api-access-s8457\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.146550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.157833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-config-data\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.157932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df191340-1fad-4c88-b12c-a4af0fc96924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.173436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8457\" (UniqueName: \"kubernetes.io/projected/df191340-1fad-4c88-b12c-a4af0fc96924-kube-api-access-s8457\") pod \"nova-scheduler-0\" (UID: \"df191340-1fad-4c88-b12c-a4af0fc96924\") " pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.273373 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.440066 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ca7515-0dcd-49f1-8431-a8ddf3b33ea7" path="/var/lib/kubelet/pods/10ca7515-0dcd-49f1-8431-a8ddf3b33ea7/volumes" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.441308 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4c556d-6661-4710-bd9e-b8197e9ea22a" path="/var/lib/kubelet/pods/8e4c556d-6661-4710-bd9e-b8197e9ea22a/volumes" Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.898253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d","Type":"ContainerStarted","Data":"b38169774ab31af4436e4e9b640cafd0dca4344b3a377614aeb463647f49e1b7"} Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.898602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d","Type":"ContainerStarted","Data":"4611909de0db0964e4c826ec36ca4d3444b6053ddb37f4041f95aaaf9dd5dfb0"} Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.898618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f70e0dab-4b7c-4e6f-b28e-76e72492ca1d","Type":"ContainerStarted","Data":"0b39aaf521805f7150985152cb9d4c23082460206127020850cb7d98f5e39732"} Dec 12 04:56:29 crc kubenswrapper[4796]: I1212 04:56:29.991018 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.99100182 podStartE2EDuration="2.99100182s" podCreationTimestamp="2025-12-12 04:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:29.924435408 +0000 UTC m=+1380.800452555" watchObservedRunningTime="2025-12-12 04:56:29.99100182 +0000 UTC m=+1380.867018967" Dec 12 04:56:30 crc kubenswrapper[4796]: I1212 04:56:29.999643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 12 04:56:30 crc kubenswrapper[4796]: I1212 04:56:30.915963 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df191340-1fad-4c88-b12c-a4af0fc96924","Type":"ContainerStarted","Data":"ac13cbc2f66d689939930a01550a53e4604919e196d300d21ca5e4aa1a49902d"} Dec 12 04:56:30 crc kubenswrapper[4796]: I1212 04:56:30.916259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"df191340-1fad-4c88-b12c-a4af0fc96924","Type":"ContainerStarted","Data":"1d7284443e66161a5da86bde3fac9138410ee0befc7131140c2acc6602d25616"} Dec 12 04:56:30 crc kubenswrapper[4796]: I1212 04:56:30.945865 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.945848741 podStartE2EDuration="2.945848741s" podCreationTimestamp="2025-12-12 04:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:30.933904546 +0000 UTC m=+1381.809921703" watchObservedRunningTime="2025-12-12 04:56:30.945848741 +0000 UTC m=+1381.821865888" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.199489 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:49184->10.217.0.195:8775: read: connection reset by peer" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.199577 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:49176->10.217.0.195:8775: read: connection reset by peer" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.742098 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.916314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs\") pod \"dea26373-27c8-4cf2-999f-a20004ce50c3\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.916579 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf8rs\" (UniqueName: \"kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs\") pod \"dea26373-27c8-4cf2-999f-a20004ce50c3\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.916663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle\") pod \"dea26373-27c8-4cf2-999f-a20004ce50c3\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.916703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data\") pod \"dea26373-27c8-4cf2-999f-a20004ce50c3\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.916746 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs\") pod \"dea26373-27c8-4cf2-999f-a20004ce50c3\" (UID: \"dea26373-27c8-4cf2-999f-a20004ce50c3\") " Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.917352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs" (OuterVolumeSpecName: "logs") pod "dea26373-27c8-4cf2-999f-a20004ce50c3" (UID: "dea26373-27c8-4cf2-999f-a20004ce50c3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.929417 4796 generic.go:334] "Generic (PLEG): container finished" podID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerID="fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30" exitCode=0 Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.929977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerDied","Data":"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30"} Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.930021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dea26373-27c8-4cf2-999f-a20004ce50c3","Type":"ContainerDied","Data":"d63faa8781aabdae4f24a18c6bb800fd0a352a3c1c5900dbe2d01a9d67c18320"} Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.930044 4796 scope.go:117] "RemoveContainer" containerID="fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.930239 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.931312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs" (OuterVolumeSpecName: "kube-api-access-bf8rs") pod "dea26373-27c8-4cf2-999f-a20004ce50c3" (UID: "dea26373-27c8-4cf2-999f-a20004ce50c3"). InnerVolumeSpecName "kube-api-access-bf8rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:56:31 crc kubenswrapper[4796]: I1212 04:56:31.971883 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dea26373-27c8-4cf2-999f-a20004ce50c3" (UID: "dea26373-27c8-4cf2-999f-a20004ce50c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.019187 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf8rs\" (UniqueName: \"kubernetes.io/projected/dea26373-27c8-4cf2-999f-a20004ce50c3-kube-api-access-bf8rs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.019243 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.019260 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dea26373-27c8-4cf2-999f-a20004ce50c3-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.055723 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data" (OuterVolumeSpecName: "config-data") pod "dea26373-27c8-4cf2-999f-a20004ce50c3" (UID: "dea26373-27c8-4cf2-999f-a20004ce50c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.057193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dea26373-27c8-4cf2-999f-a20004ce50c3" (UID: "dea26373-27c8-4cf2-999f-a20004ce50c3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.068083 4796 scope.go:117] "RemoveContainer" containerID="2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.119023 4796 scope.go:117] "RemoveContainer" containerID="fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30" Dec 12 04:56:32 crc kubenswrapper[4796]: E1212 04:56:32.119521 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30\": container with ID starting with fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30 not found: ID does not exist" containerID="fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.119563 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30"} err="failed to get container status \"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30\": rpc error: code = NotFound desc = could not find container \"fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30\": container with ID starting with fe4a47f5a01399b87e8947461d8be58f9c84773799fc6f12b0edf01931be7d30 not found: ID does not exist" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.119592 4796 scope.go:117] "RemoveContainer" containerID="2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601" Dec 12 04:56:32 crc kubenswrapper[4796]: E1212 04:56:32.120034 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601\": container with ID starting with 2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601 not found: ID does not exist" containerID="2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.120107 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601"} err="failed to get container status \"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601\": rpc error: code = NotFound desc = could not find container \"2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601\": container with ID starting with 2c9c95f0514bab586dca3729faf2d600fe4b29d9f92a4ad66039f37734fe7601 not found: ID does not exist" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.121225 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.121247 4796 reconciler_common.go:293] "Volume detached for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dea26373-27c8-4cf2-999f-a20004ce50c3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.275632 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.287505 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.306462 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:32 crc kubenswrapper[4796]: E1212 04:56:32.306949 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.306973 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" Dec 12 04:56:32 crc kubenswrapper[4796]: E1212 04:56:32.306992 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.307001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.315124 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-metadata" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.315179 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" containerName="nova-metadata-log" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.316661 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.320625 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.321221 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.401875 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.431402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-config-data\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.431720 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-logs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.431883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.432044 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48dw\" (UniqueName: \"kubernetes.io/projected/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-kube-api-access-s48dw\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.432257 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.534371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.535712 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-config-data\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.535918 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-logs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 
04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.536072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.536257 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48dw\" (UniqueName: \"kubernetes.io/projected/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-kube-api-access-s48dw\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.538896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-logs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.545560 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-config-data\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.555300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.558704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48dw\" (UniqueName: \"kubernetes.io/projected/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-kube-api-access-s48dw\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.571334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/556757b9-1c0e-4bc0-8a0f-81a77ab8705b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"556757b9-1c0e-4bc0-8a0f-81a77ab8705b\") " pod="openstack/nova-metadata-0" Dec 12 04:56:32 crc kubenswrapper[4796]: I1212 04:56:32.632234 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 12 04:56:33 crc kubenswrapper[4796]: I1212 04:56:33.176797 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 12 04:56:33 crc kubenswrapper[4796]: W1212 04:56:33.180121 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod556757b9_1c0e_4bc0_8a0f_81a77ab8705b.slice/crio-d7fbdaaf8b40a69021529f125bfca68a4dac45dd1a32aa0c83f4866f5fd41b4f WatchSource:0}: Error finding container d7fbdaaf8b40a69021529f125bfca68a4dac45dd1a32aa0c83f4866f5fd41b4f: Status 404 returned error can't find the container with id d7fbdaaf8b40a69021529f125bfca68a4dac45dd1a32aa0c83f4866f5fd41b4f Dec 12 04:56:33 crc kubenswrapper[4796]: I1212 04:56:33.422667 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea26373-27c8-4cf2-999f-a20004ce50c3" path="/var/lib/kubelet/pods/dea26373-27c8-4cf2-999f-a20004ce50c3/volumes" Dec 12 04:56:33 crc kubenswrapper[4796]: I1212 04:56:33.948699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"556757b9-1c0e-4bc0-8a0f-81a77ab8705b","Type":"ContainerStarted","Data":"7e7b86201d066731ccb3735cb953368d601f7cdc73b82d3f588c994e0325c978"} Dec 12 04:56:33 crc kubenswrapper[4796]: I1212 04:56:33.949012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"556757b9-1c0e-4bc0-8a0f-81a77ab8705b","Type":"ContainerStarted","Data":"d7fbdaaf8b40a69021529f125bfca68a4dac45dd1a32aa0c83f4866f5fd41b4f"} Dec 12 04:56:34 crc kubenswrapper[4796]: I1212 04:56:34.273986 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 12 04:56:34 crc kubenswrapper[4796]: I1212 04:56:34.975677 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"556757b9-1c0e-4bc0-8a0f-81a77ab8705b","Type":"ContainerStarted","Data":"f22a8eb398b7925cff5c91cf3ee9bcb467b20ef6524ec136f0792145589d97d7"} Dec 12 04:56:35 crc kubenswrapper[4796]: I1212 04:56:35.014328 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.014310151 podStartE2EDuration="3.014310151s" podCreationTimestamp="2025-12-12 04:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:56:35.006700655 +0000 UTC m=+1385.882717802" watchObservedRunningTime="2025-12-12 04:56:35.014310151 +0000 UTC m=+1385.890327298" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.018617 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.020388 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.022222 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.110570 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 
12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.110627 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.115750 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.632383 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:56:37 crc kubenswrapper[4796]: I1212 04:56:37.632443 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 12 04:56:38 crc kubenswrapper[4796]: I1212 04:56:38.265707 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:56:38 crc kubenswrapper[4796]: I1212 04:56:38.265759 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 12 04:56:39 crc kubenswrapper[4796]: I1212 04:56:39.274235 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 12 04:56:39 crc kubenswrapper[4796]: I1212 04:56:39.280514 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f70e0dab-4b7c-4e6f-b28e-76e72492ca1d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:39 crc kubenswrapper[4796]: I1212 04:56:39.280594 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f70e0dab-4b7c-4e6f-b28e-76e72492ca1d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:39 crc kubenswrapper[4796]: I1212 04:56:39.302352 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 12 04:56:40 crc kubenswrapper[4796]: I1212 04:56:40.043384 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 12 04:56:42 crc kubenswrapper[4796]: I1212 04:56:42.632915 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 04:56:42 crc kubenswrapper[4796]: I1212 04:56:42.633263 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 12 04:56:43 crc kubenswrapper[4796]: I1212 04:56:43.647576 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="556757b9-1c0e-4bc0-8a0f-81a77ab8705b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 04:56:43 crc kubenswrapper[4796]: I1212 04:56:43.647896 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="556757b9-1c0e-4bc0-8a0f-81a77ab8705b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 
12 04:56:47 crc kubenswrapper[4796]: I1212 04:56:47.018953 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb55bccb4-z8p6q" podUID="7913672c-384c-472c-89a8-0d546f345a28" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 12 04:56:47 crc kubenswrapper[4796]: I1212 04:56:47.097487 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:56:48 crc kubenswrapper[4796]: I1212 04:56:48.271940 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 04:56:48 crc kubenswrapper[4796]: I1212 04:56:48.272804 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 04:56:48 crc kubenswrapper[4796]: I1212 04:56:48.274493 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 12 04:56:48 crc kubenswrapper[4796]: I1212 04:56:48.279413 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 04:56:48 crc kubenswrapper[4796]: I1212 04:56:48.979242 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 12 04:56:49 crc kubenswrapper[4796]: I1212 04:56:49.096443 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 12 04:56:49 crc kubenswrapper[4796]: I1212 04:56:49.110534 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 12 04:56:52 crc kubenswrapper[4796]: I1212 04:56:52.640339 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 04:56:52 crc kubenswrapper[4796]: I1212 04:56:52.642824 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 12 04:56:52 crc kubenswrapper[4796]: I1212 04:56:52.645916 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 04:56:53 crc kubenswrapper[4796]: I1212 04:56:53.135783 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 12 04:56:59 crc kubenswrapper[4796]: I1212 04:56:59.200763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:56:59 crc kubenswrapper[4796]: I1212 04:56:59.204573 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:57:01 crc kubenswrapper[4796]: I1212 04:57:01.244173 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:57:01 crc kubenswrapper[4796]: I1212 04:57:01.248926 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cb55bccb4-z8p6q" Dec 12 04:57:01 crc kubenswrapper[4796]: I1212 04:57:01.350113 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:57:02 crc kubenswrapper[4796]: I1212 04:57:02.208770 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon-log" containerID="cri-o://c2e1ad3cacace7f7a4135db5631dba25cffc7f212d2b2651269d02773d710dcb" gracePeriod=30 Dec 12 04:57:02 crc kubenswrapper[4796]: I1212 04:57:02.208874 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" containerID="cri-o://1403caedf16585011344533a763725c13ff01db71c0904473a957acd25e5a3ff" gracePeriod=30 Dec 12 04:57:02 crc kubenswrapper[4796]: I1212 04:57:02.970121 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:57:02 crc kubenswrapper[4796]: I1212 04:57:02.970189 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:57:06 crc kubenswrapper[4796]: I1212 04:57:06.251836 4796 generic.go:334] "Generic (PLEG): container finished" podID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerID="1403caedf16585011344533a763725c13ff01db71c0904473a957acd25e5a3ff" exitCode=0 Dec 12 04:57:06 crc kubenswrapper[4796]: I1212 04:57:06.251940 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"1403caedf16585011344533a763725c13ff01db71c0904473a957acd25e5a3ff"} Dec 12 04:57:06 crc kubenswrapper[4796]: I1212 04:57:06.253963 4796 scope.go:117] "RemoveContainer" containerID="27ffa7fc276a9c228093d346b21bec0cd22db41b04cca501cb5fd0a4340fbf3c" Dec 12 04:57:07 crc kubenswrapper[4796]: I1212 04:57:07.095042 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:57:09 crc kubenswrapper[4796]: I1212 04:57:09.439073 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:10 crc kubenswrapper[4796]: I1212 04:57:10.845470 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:13 crc kubenswrapper[4796]: I1212 04:57:13.707601 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="rabbitmq" containerID="cri-o://0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549" gracePeriod=604796 Dec 12 04:57:14 crc kubenswrapper[4796]: I1212 04:57:14.979627 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="rabbitmq" containerID="cri-o://e9a38a9ac495bc917495ac896f1a02cd876fb20265c786e3038568aaa054b1fa" gracePeriod=604796 Dec 12 04:57:17 
crc kubenswrapper[4796]: I1212 04:57:17.095715 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:57:18 crc kubenswrapper[4796]: I1212 04:57:18.139447 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 12 04:57:18 crc kubenswrapper[4796]: I1212 04:57:18.515484 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.235227 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.327850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.327993 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328125 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328160 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328264 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf\") pod 
\"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrk4\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328416 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.328439 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd\") pod \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\" (UID: \"8f474c7f-e87c-4c21-8ebb-f0266779bceb\") " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.334172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.336048 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.337103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.341722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4" (OuterVolumeSpecName: "kube-api-access-bdrk4") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "kube-api-access-bdrk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.342465 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.342597 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info" (OuterVolumeSpecName: "pod-info") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.344412 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.354895 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.403268 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data" (OuterVolumeSpecName: "config-data") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.419199 4796 generic.go:334] "Generic (PLEG): container finished" podID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerID="0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549" exitCode=0 Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.419739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerDied","Data":"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549"} Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.419996 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f474c7f-e87c-4c21-8ebb-f0266779bceb","Type":"ContainerDied","Data":"70d3255715ab3f7a36aa4bf09994fb998e08d3dc921aaa8a2ada5b6f017a830d"} Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.420362 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.420379 4796 scope.go:117] "RemoveContainer" containerID="0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.435486 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.435742 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrk4\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-kube-api-access-bdrk4\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.435836 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.436811 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.436958 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f474c7f-e87c-4c21-8ebb-f0266779bceb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.437072 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.437214 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.437353 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f474c7f-e87c-4c21-8ebb-f0266779bceb-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.437497 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.477748 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.477778 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf" (OuterVolumeSpecName: "server-conf") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.512748 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8f474c7f-e87c-4c21-8ebb-f0266779bceb" (UID: "8f474c7f-e87c-4c21-8ebb-f0266779bceb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.520119 4796 scope.go:117] "RemoveContainer" containerID="dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.540123 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f474c7f-e87c-4c21-8ebb-f0266779bceb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.540161 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f474c7f-e87c-4c21-8ebb-f0266779bceb-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.540171 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.555736 4796 scope.go:117] "RemoveContainer" containerID="0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549" Dec 12 04:57:20 crc kubenswrapper[4796]: E1212 04:57:20.556162 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549\": container with ID starting with 0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549 not found: ID does not exist" containerID="0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.556191 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549"} err="failed to get container status \"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549\": rpc error: code = NotFound desc = could not find container \"0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549\": container with ID starting with 0438e40c33c4b255d401f4f821233f9179ea1d1fc04be5cf6e8be181732d2549 not found: ID does not exist" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.556247 4796 scope.go:117] "RemoveContainer" containerID="dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681" Dec 12 04:57:20 crc kubenswrapper[4796]: E1212 04:57:20.556858 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681\": container with ID starting with dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681 not found: ID does not exist" containerID="dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.556878 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681"} err="failed to get container status \"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681\": rpc error: code = NotFound desc = could not find container \"dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681\": container with ID starting with dbef1f5fab3617249b4d49f7146178805fc0a56ef71c12cff2dcf7382b8f9681 not found: ID does not exist" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.753879 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.766578 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.791476 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:20 crc kubenswrapper[4796]: E1212 04:57:20.791968 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="rabbitmq" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.791991 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="rabbitmq" Dec 12 04:57:20 crc kubenswrapper[4796]: E1212 04:57:20.792013 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="setup-container" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.792022 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="setup-container" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.792276 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" containerName="rabbitmq" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.793478 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.795120 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.795357 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.795634 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.795789 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.795953 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.796076 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xsbpn" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.796131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.844456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.844766 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.844925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.845102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.845335 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.845532 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 
crc kubenswrapper[4796]: I1212 04:57:20.845685 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-config-data\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.845821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9rg\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-kube-api-access-kh9rg\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.846045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.846170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.846315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.860409 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-config-data\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh9rg\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-kube-api-access-kh9rg\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc 
kubenswrapper[4796]: I1212 04:57:20.947910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.947993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.948014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.948033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.950546 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.952171 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.952424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.952751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.953097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-config-data\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.953683 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.953691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.953989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.957186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.971357 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:20 crc kubenswrapper[4796]: I1212 04:57:20.976079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh9rg\" (UniqueName: \"kubernetes.io/projected/c4628c3c-0ba5-4dcd-b4a9-003b5dc95119-kube-api-access-kh9rg\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.009119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119\") " pod="openstack/rabbitmq-server-0" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.109616 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.428483 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f474c7f-e87c-4c21-8ebb-f0266779bceb" path="/var/lib/kubelet/pods/8f474c7f-e87c-4c21-8ebb-f0266779bceb/volumes" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.434136 4796 generic.go:334] "Generic (PLEG): container finished" podID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerID="e9a38a9ac495bc917495ac896f1a02cd876fb20265c786e3038568aaa054b1fa" exitCode=0 Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.434183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerDied","Data":"e9a38a9ac495bc917495ac896f1a02cd876fb20265c786e3038568aaa054b1fa"} Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.598887 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669418 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669501 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669575 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669614 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669670 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc 
kubenswrapper[4796]: I1212 04:57:21.669773 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669842 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.669868 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv6jf\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf\") pod \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\" (UID: \"e0ec4e97-93b3-46f0-9b09-76c22a3ed215\") " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.671485 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.676889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf" (OuterVolumeSpecName: "kube-api-access-xv6jf") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "kube-api-access-xv6jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.677165 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.677531 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.697633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.699036 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.721269 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.723999 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data" (OuterVolumeSpecName: "config-data") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.738784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info" (OuterVolumeSpecName: "pod-info") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.742118 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772546 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772600 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772639 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772648 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-pod-info\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772657 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv6jf\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-kube-api-access-xv6jf\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772665 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772672 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772680 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.772687 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.794303 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.799397 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf" (OuterVolumeSpecName: "server-conf") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.855215 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e0ec4e97-93b3-46f0-9b09-76c22a3ed215" (UID: "e0ec4e97-93b3-46f0-9b09-76c22a3ed215"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.874208 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.874238 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:21 crc kubenswrapper[4796]: I1212 04:57:21.874249 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0ec4e97-93b3-46f0-9b09-76c22a3ed215-server-conf\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.445259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119","Type":"ContainerStarted","Data":"f3438dcb176d97fda3ec40246baec5b313065f2efd6f1c2baf6755cee14e4290"} Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.447210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0ec4e97-93b3-46f0-9b09-76c22a3ed215","Type":"ContainerDied","Data":"58b494214f2fc0f0a7e3dbe96cb70c41346e47015294fea2251124227a97563e"} Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.447420 4796 scope.go:117] "RemoveContainer" containerID="e9a38a9ac495bc917495ac896f1a02cd876fb20265c786e3038568aaa054b1fa" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.447321 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.488433 4796 scope.go:117] "RemoveContainer" containerID="0638c4b39def9c37b4ed634dc7f7190e375875f7342113d40c4cbff5aad06f38" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.490546 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.504600 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.512096 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:22 crc kubenswrapper[4796]: E1212 04:57:22.512520 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="rabbitmq" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.512532 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="rabbitmq" Dec 12 04:57:22 crc kubenswrapper[4796]: E1212 04:57:22.512563 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="setup-container" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.512568 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="setup-container" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.512760 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" containerName="rabbitmq" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.513674 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518253 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518501 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518701 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518820 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518919 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.518925 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.519221 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9dzjt" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.544710 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586198 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d31d0723-e71d-4ec0-89e8-645a248d9add-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586516 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586707 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hnj\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-kube-api-access-x8hnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586859 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586893 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d31d0723-e71d-4ec0-89e8-645a248d9add-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.586937 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688374 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688432 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hnj\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-kube-api-access-x8hnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689150 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d31d0723-e71d-4ec0-89e8-645a248d9add-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d31d0723-e71d-4ec0-89e8-645a248d9add-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689069 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689466 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.689867 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.688875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.690165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.690691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d31d0723-e71d-4ec0-89e8-645a248d9add-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.694259 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d31d0723-e71d-4ec0-89e8-645a248d9add-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.695009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.695888 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d31d0723-e71d-4ec0-89e8-645a248d9add-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.702761 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.710960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hnj\" (UniqueName: \"kubernetes.io/projected/d31d0723-e71d-4ec0-89e8-645a248d9add-kube-api-access-x8hnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.809480 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d31d0723-e71d-4ec0-89e8-645a248d9add\") " pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:22 crc kubenswrapper[4796]: I1212 04:57:22.875309 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.346078 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.431157 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ec4e97-93b3-46f0-9b09-76c22a3ed215" path="/var/lib/kubelet/pods/e0ec4e97-93b3-46f0-9b09-76c22a3ed215/volumes" Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.470094 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119","Type":"ContainerStarted","Data":"b8aae20b7b4a84e062cc691ab811c2abb2957ef95ed91808bafb36670d6cb7d7"} Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.475062 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d31d0723-e71d-4ec0-89e8-645a248d9add","Type":"ContainerStarted","Data":"9ba6665f4052c7af6ed905b12b563723fd17fd921fdd4192937e9bbc85bc738d"} Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.915739 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.918525 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.923250 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 12 04:57:23 crc kubenswrapper[4796]: I1212 04:57:23.923836 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.018449 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.018714 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.018811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.018908 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.019006 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.019155 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqbp\" (UniqueName: \"kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.019245 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.120942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.121293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.121442 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.121705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpqbp\" (UniqueName: \"kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.121852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122013 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122947 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.122976 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.123091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.123378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.142100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpqbp\" (UniqueName: \"kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp\") pod \"dnsmasq-dns-d558885bc-j4r47\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.239903 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:24 crc kubenswrapper[4796]: I1212 04:57:24.818684 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:25 crc kubenswrapper[4796]: I1212 04:57:25.509940 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d31d0723-e71d-4ec0-89e8-645a248d9add","Type":"ContainerStarted","Data":"b2ebc120ea009815e4616e8126a192fd467256bc59af98025aabb237896e3bbb"} Dec 12 04:57:25 crc kubenswrapper[4796]: I1212 04:57:25.516423 4796 generic.go:334] "Generic (PLEG): container finished" podID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerID="a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36" exitCode=0 Dec 12 04:57:25 crc kubenswrapper[4796]: I1212 04:57:25.516500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r47" event={"ID":"714ab6ab-6afa-49c0-8218-7b9d565e5667","Type":"ContainerDied","Data":"a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36"} Dec 12 04:57:25 crc kubenswrapper[4796]: I1212 04:57:25.516529 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r47" event={"ID":"714ab6ab-6afa-49c0-8218-7b9d565e5667","Type":"ContainerStarted","Data":"268767cfb6d05e495b67d4a195a628d07d37cac19a67d47d40a59009e604f605"} Dec 12 04:57:26 crc kubenswrapper[4796]: I1212 04:57:26.529093 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r47" event={"ID":"714ab6ab-6afa-49c0-8218-7b9d565e5667","Type":"ContainerStarted","Data":"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81"} Dec 12 04:57:26 crc kubenswrapper[4796]: I1212 04:57:26.529689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:26 crc kubenswrapper[4796]: I1212 04:57:26.552478 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-j4r47" podStartSLOduration=3.552463276 podStartE2EDuration="3.552463276s" podCreationTimestamp="2025-12-12 04:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:57:26.548652066 +0000 UTC m=+1437.424669213" watchObservedRunningTime="2025-12-12 04:57:26.552463276 +0000 UTC m=+1437.428480423" Dec 12 04:57:27 crc kubenswrapper[4796]: I1212 04:57:27.095352 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67764d6b9b-h7fdk" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 12 04:57:27 crc kubenswrapper[4796]: I1212 04:57:27.095619 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.592079 4796 generic.go:334] "Generic (PLEG): container finished" podID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerID="c2e1ad3cacace7f7a4135db5631dba25cffc7f212d2b2651269d02773d710dcb" exitCode=137 Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.592155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" 
event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"c2e1ad3cacace7f7a4135db5631dba25cffc7f212d2b2651269d02773d710dcb"} Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.671812 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800586 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwqhc\" (UniqueName: \"kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800734 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.800955 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts\") pod \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\" (UID: \"a9dd4b9b-2536-495d-bc5c-c3260fa7289a\") " Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.801375 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs" (OuterVolumeSpecName: "logs") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.801606 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-logs\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.815638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc" (OuterVolumeSpecName: "kube-api-access-xwqhc") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "kube-api-access-xwqhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.816624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.825120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data" (OuterVolumeSpecName: "config-data") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.839109 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts" (OuterVolumeSpecName: "scripts") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.850822 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.882079 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a9dd4b9b-2536-495d-bc5c-c3260fa7289a" (UID: "a9dd4b9b-2536-495d-bc5c-c3260fa7289a"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.902757 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-scripts\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.903013 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwqhc\" (UniqueName: \"kubernetes.io/projected/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-kube-api-access-xwqhc\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.903090 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.903162 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.903252 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.903351 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9dd4b9b-2536-495d-bc5c-c3260fa7289a-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.969802 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:57:32 crc kubenswrapper[4796]: I1212 04:57:32.969867 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.609684 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67764d6b9b-h7fdk" event={"ID":"a9dd4b9b-2536-495d-bc5c-c3260fa7289a","Type":"ContainerDied","Data":"887db9a6da6f04a7de5440932eea4c5eb76f1b16cbddb9c5a41123cb13804e74"} Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.609738 4796 scope.go:117] "RemoveContainer" containerID="1403caedf16585011344533a763725c13ff01db71c0904473a957acd25e5a3ff" Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.609885 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67764d6b9b-h7fdk" Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.661444 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.677046 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67764d6b9b-h7fdk"] Dec 12 04:57:33 crc kubenswrapper[4796]: I1212 04:57:33.875292 4796 scope.go:117] "RemoveContainer" containerID="c2e1ad3cacace7f7a4135db5631dba25cffc7f212d2b2651269d02773d710dcb" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.242486 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.317384 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.317846 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="dnsmasq-dns" containerID="cri-o://f5dd80b8c6604503c4c8d65ed2ea620db02113e3a305668630aa62cc90746740" gracePeriod=10 Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.504395 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d47554775-pbd74"] Dec 12 04:57:34 crc kubenswrapper[4796]: E1212 04:57:34.504806 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.504822 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: E1212 04:57:34.504830 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.504837 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: E1212 04:57:34.504857 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.504862 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: E1212 04:57:34.504882 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon-log" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.504887 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon-log" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505049 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505060 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505074 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 
04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505090 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon-log" Dec 12 04:57:34 crc kubenswrapper[4796]: E1212 04:57:34.505292 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505300 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.505481 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" containerName="horizon" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.506049 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.554218 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d47554775-pbd74"] Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.633746 4796 generic.go:334] "Generic (PLEG): container finished" podID="306331bd-a744-4aa2-8736-e253119cd622" containerID="f5dd80b8c6604503c4c8d65ed2ea620db02113e3a305668630aa62cc90746740" exitCode=0 Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.633789 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" event={"ID":"306331bd-a744-4aa2-8736-e253119cd622","Type":"ContainerDied","Data":"f5dd80b8c6604503c4c8d65ed2ea620db02113e3a305668630aa62cc90746740"} Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648411 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrn8\" (UniqueName: \"kubernetes.io/projected/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-kube-api-access-mwrn8\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648455 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-svc\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648692 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.648867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-config\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrn8\" (UniqueName: \"kubernetes.io/projected/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-kube-api-access-mwrn8\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750498 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-svc\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.750618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-config\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.751786 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.751859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-svc\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.751911 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.751967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.752390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.752630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-config\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.794438 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrn8\" (UniqueName: \"kubernetes.io/projected/ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7-kube-api-access-mwrn8\") pod \"dnsmasq-dns-6d47554775-pbd74\" (UID: \"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7\") " pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:34 crc kubenswrapper[4796]: I1212 04:57:34.837829 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.047902 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172272 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddngd\" (UniqueName: \"kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172385 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172545 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.172596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb\") pod \"306331bd-a744-4aa2-8736-e253119cd622\" (UID: \"306331bd-a744-4aa2-8736-e253119cd622\") " Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.190687 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd" (OuterVolumeSpecName: "kube-api-access-ddngd") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "kube-api-access-ddngd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.275707 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddngd\" (UniqueName: \"kubernetes.io/projected/306331bd-a744-4aa2-8736-e253119cd622-kube-api-access-ddngd\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.309989 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.318237 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.352699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.363708 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.373120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config" (OuterVolumeSpecName: "config") pod "306331bd-a744-4aa2-8736-e253119cd622" (UID: "306331bd-a744-4aa2-8736-e253119cd622"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.381476 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.381506 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.381516 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.381525 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.381533 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/306331bd-a744-4aa2-8736-e253119cd622-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.421318 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9dd4b9b-2536-495d-bc5c-c3260fa7289a" path="/var/lib/kubelet/pods/a9dd4b9b-2536-495d-bc5c-c3260fa7289a/volumes" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.547063 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d47554775-pbd74"] Dec 
12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.642957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" event={"ID":"306331bd-a744-4aa2-8736-e253119cd622","Type":"ContainerDied","Data":"1e7c86fbc585898f225fb2640409057ecc48000b519a38ea80d6de33ff4838b9"} Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.644216 4796 scope.go:117] "RemoveContainer" containerID="f5dd80b8c6604503c4c8d65ed2ea620db02113e3a305668630aa62cc90746740" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.644443 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-j4lm4" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.659546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d47554775-pbd74" event={"ID":"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7","Type":"ContainerStarted","Data":"4f3fd71a974fe8c59b2e396d4b7007ae94dacef295411ad130db76c5a2ec1436"} Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.724542 4796 scope.go:117] "RemoveContainer" containerID="f031963b565fec334496e122754415d2636e3bbd33ca0988c7ce634e3e8e572c" Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.767120 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:57:35 crc kubenswrapper[4796]: I1212 04:57:35.778903 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-j4lm4"] Dec 12 04:57:36 crc kubenswrapper[4796]: I1212 04:57:36.669892 4796 generic.go:334] "Generic (PLEG): container finished" podID="ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7" containerID="0d78f0ae95c835d4ffcb4f187745657e1803efba18bbc2b05b8bf2f8e2c7c27a" exitCode=0 Dec 12 04:57:36 crc kubenswrapper[4796]: I1212 04:57:36.669957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d47554775-pbd74" event={"ID":"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7","Type":"ContainerDied","Data":"0d78f0ae95c835d4ffcb4f187745657e1803efba18bbc2b05b8bf2f8e2c7c27a"} Dec 12 04:57:37 crc kubenswrapper[4796]: I1212 04:57:37.424728 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306331bd-a744-4aa2-8736-e253119cd622" path="/var/lib/kubelet/pods/306331bd-a744-4aa2-8736-e253119cd622/volumes" Dec 12 04:57:37 crc kubenswrapper[4796]: I1212 04:57:37.683434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d47554775-pbd74" event={"ID":"ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7","Type":"ContainerStarted","Data":"022fc5e852cbdca14e4df4a53bd16a023b2db02270c8580f78930f19c36aef12"} Dec 12 04:57:37 crc kubenswrapper[4796]: I1212 04:57:37.684121 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:37 crc kubenswrapper[4796]: I1212 04:57:37.712315 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d47554775-pbd74" podStartSLOduration=3.7122971209999998 podStartE2EDuration="3.712297121s" podCreationTimestamp="2025-12-12 04:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:57:37.705321832 +0000 UTC m=+1448.581338989" watchObservedRunningTime="2025-12-12 04:57:37.712297121 +0000 UTC m=+1448.588314268" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.626738 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 
12 04:57:43 crc kubenswrapper[4796]: E1212 04:57:43.627624 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="init" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.627641 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="init" Dec 12 04:57:43 crc kubenswrapper[4796]: E1212 04:57:43.627687 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="dnsmasq-dns" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.627696 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="dnsmasq-dns" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.627931 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="306331bd-a744-4aa2-8736-e253119cd622" containerName="dnsmasq-dns" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.629611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.652979 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.682508 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.682593 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.682675 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxdq\" (UniqueName: \"kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.784151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.784244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.784366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxdq\" (UniqueName: 
\"kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.784689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.785120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.814561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxdq\" (UniqueName: \"kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq\") pod \"redhat-operators-n959c\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:43 crc kubenswrapper[4796]: I1212 04:57:43.960551 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.449632 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.746158 4796 generic.go:334] "Generic (PLEG): container finished" podID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerID="0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc" exitCode=0 Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.746217 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerDied","Data":"0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc"} Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.746480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerStarted","Data":"7c809040e33d1abe343da8167332053da4aa81c2595e33b085306927ec299678"} Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.839584 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d47554775-pbd74" Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.925701 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:44 crc kubenswrapper[4796]: I1212 04:57:44.925961 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-j4r47" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="dnsmasq-dns" containerID="cri-o://233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81" gracePeriod=10 Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.225362 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58987c9f79-c2xlb" podUID="80ea0a4a-0715-4d5b-be0c-e11f00e6d743" 
containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.530792 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.630423 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpqbp\" (UniqueName: \"kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.630563 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.630631 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.630755 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.630778 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.631459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.631516 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config\") pod \"714ab6ab-6afa-49c0-8218-7b9d565e5667\" (UID: \"714ab6ab-6afa-49c0-8218-7b9d565e5667\") " Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.649818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp" (OuterVolumeSpecName: "kube-api-access-hpqbp") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "kube-api-access-hpqbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.710459 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.740650 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpqbp\" (UniqueName: \"kubernetes.io/projected/714ab6ab-6afa-49c0-8218-7b9d565e5667-kube-api-access-hpqbp\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.742250 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.745333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.768468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerStarted","Data":"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44"} Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775448 4796 generic.go:334] "Generic (PLEG): container finished" podID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerID="233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81" exitCode=0 Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775482 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r47" event={"ID":"714ab6ab-6afa-49c0-8218-7b9d565e5667","Type":"ContainerDied","Data":"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81"} Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-j4r47" event={"ID":"714ab6ab-6afa-49c0-8218-7b9d565e5667","Type":"ContainerDied","Data":"268767cfb6d05e495b67d4a195a628d07d37cac19a67d47d40a59009e604f605"} Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775519 4796 scope.go:117] "RemoveContainer" containerID="233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775630 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-j4r47" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.775908 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.781171 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config" (OuterVolumeSpecName: "config") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.790309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.806856 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "714ab6ab-6afa-49c0-8218-7b9d565e5667" (UID: "714ab6ab-6afa-49c0-8218-7b9d565e5667"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.840203 4796 scope.go:117] "RemoveContainer" containerID="a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.845870 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.845900 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-config\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.845909 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.845917 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.845929 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714ab6ab-6afa-49c0-8218-7b9d565e5667-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.881018 4796 scope.go:117] "RemoveContainer" containerID="233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81" Dec 12 04:57:45 crc kubenswrapper[4796]: E1212 04:57:45.884531 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81\": container with ID starting with 233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81 not found: ID does not exist" containerID="233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81" Dec 12 
04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.884556 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81"} err="failed to get container status \"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81\": rpc error: code = NotFound desc = could not find container \"233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81\": container with ID starting with 233d30a29c31708e961cec02fa48651143958960d6c1c5060cce2259f1861f81 not found: ID does not exist" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.884577 4796 scope.go:117] "RemoveContainer" containerID="a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36" Dec 12 04:57:45 crc kubenswrapper[4796]: E1212 04:57:45.885374 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36\": container with ID starting with a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36 not found: ID does not exist" containerID="a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36" Dec 12 04:57:45 crc kubenswrapper[4796]: I1212 04:57:45.885392 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36"} err="failed to get container status \"a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36\": rpc error: code = NotFound desc = could not find container \"a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36\": container with ID starting with a0681303c6327c50478767b35b2ee2168a8e4ccf5c2f0ae05552adb5aca36e36 not found: ID does not exist" Dec 12 04:57:46 crc kubenswrapper[4796]: I1212 04:57:46.135936 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:46 crc kubenswrapper[4796]: I1212 04:57:46.151672 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-j4r47"] Dec 12 04:57:47 crc kubenswrapper[4796]: I1212 04:57:47.483246 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" path="/var/lib/kubelet/pods/714ab6ab-6afa-49c0-8218-7b9d565e5667/volumes" Dec 12 04:57:49 crc kubenswrapper[4796]: I1212 04:57:49.835444 4796 generic.go:334] "Generic (PLEG): container finished" podID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerID="f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44" exitCode=0 Dec 12 04:57:49 crc kubenswrapper[4796]: I1212 04:57:49.835740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerDied","Data":"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44"} Dec 12 04:57:50 crc kubenswrapper[4796]: I1212 04:57:50.846658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerStarted","Data":"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd"} Dec 12 04:57:50 crc kubenswrapper[4796]: I1212 04:57:50.884118 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n959c" podStartSLOduration=2.326808251 podStartE2EDuration="7.884096916s" 
podCreationTimestamp="2025-12-12 04:57:43 +0000 UTC" firstStartedPulling="2025-12-12 04:57:44.747900796 +0000 UTC m=+1455.623917943" lastFinishedPulling="2025-12-12 04:57:50.305189461 +0000 UTC m=+1461.181206608" observedRunningTime="2025-12-12 04:57:50.87433803 +0000 UTC m=+1461.750355197" watchObservedRunningTime="2025-12-12 04:57:50.884096916 +0000 UTC m=+1461.760114063" Dec 12 04:57:53 crc kubenswrapper[4796]: I1212 04:57:53.960993 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:53 crc kubenswrapper[4796]: I1212 04:57:53.961556 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:57:55 crc kubenswrapper[4796]: I1212 04:57:55.013519 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n959c" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="registry-server" probeResult="failure" output=< Dec 12 04:57:55 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 04:57:55 crc kubenswrapper[4796]: > Dec 12 04:57:55 crc kubenswrapper[4796]: I1212 04:57:55.925746 4796 generic.go:334] "Generic (PLEG): container finished" podID="c4628c3c-0ba5-4dcd-b4a9-003b5dc95119" containerID="b8aae20b7b4a84e062cc691ab811c2abb2957ef95ed91808bafb36670d6cb7d7" exitCode=0 Dec 12 04:57:55 crc kubenswrapper[4796]: I1212 04:57:55.925792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119","Type":"ContainerDied","Data":"b8aae20b7b4a84e062cc691ab811c2abb2957ef95ed91808bafb36670d6cb7d7"} Dec 12 04:57:56 crc kubenswrapper[4796]: I1212 04:57:56.936798 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c4628c3c-0ba5-4dcd-b4a9-003b5dc95119","Type":"ContainerStarted","Data":"87379baad984a328d4a1fd31dba8051462832f46603469ae55a1b79e21e8d7fe"} Dec 12 04:57:56 crc kubenswrapper[4796]: I1212 04:57:56.937539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 12 04:57:56 crc kubenswrapper[4796]: I1212 04:57:56.938687 4796 generic.go:334] "Generic (PLEG): container finished" podID="d31d0723-e71d-4ec0-89e8-645a248d9add" containerID="b2ebc120ea009815e4616e8126a192fd467256bc59af98025aabb237896e3bbb" exitCode=0 Dec 12 04:57:56 crc kubenswrapper[4796]: I1212 04:57:56.938718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d31d0723-e71d-4ec0-89e8-645a248d9add","Type":"ContainerDied","Data":"b2ebc120ea009815e4616e8126a192fd467256bc59af98025aabb237896e3bbb"} Dec 12 04:57:57 crc kubenswrapper[4796]: I1212 04:57:57.007366 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.007341891 podStartE2EDuration="37.007341891s" podCreationTimestamp="2025-12-12 04:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:57:56.986188919 +0000 UTC m=+1467.862206066" watchObservedRunningTime="2025-12-12 04:57:57.007341891 +0000 UTC m=+1467.883359038" Dec 12 04:57:57 crc kubenswrapper[4796]: I1212 04:57:57.948408 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d31d0723-e71d-4ec0-89e8-645a248d9add","Type":"ContainerStarted","Data":"99cc9adfa985694dbee29a68dbf63d485d052004295983257494669cce36c0b8"} Dec 12 04:57:57 crc kubenswrapper[4796]: I1212 04:57:57.949689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:57:57 crc kubenswrapper[4796]: I1212 04:57:57.979768 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.97974888 podStartE2EDuration="35.97974888s" podCreationTimestamp="2025-12-12 04:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 04:57:57.974782725 +0000 UTC m=+1468.850799872" watchObservedRunningTime="2025-12-12 04:57:57.97974888 +0000 UTC m=+1468.855766027" Dec 12 04:58:00 crc kubenswrapper[4796]: I1212 04:58:00.468857 4796 scope.go:117] "RemoveContainer" containerID="ae3832fde56a3737fffc124308918d7f38425baa9a935c6365f0a5f287b44c4d" Dec 12 04:58:00 crc kubenswrapper[4796]: I1212 04:58:00.512647 4796 scope.go:117] "RemoveContainer" containerID="c4da76d45ace94386b384f4a759ac4788636265db5d21574e1d23b6c8ee53ac6" Dec 12 04:58:00 crc kubenswrapper[4796]: I1212 04:58:00.556886 4796 scope.go:117] "RemoveContainer" containerID="46cab9999f2975de2b571bdda08c04fdf61223bc4363395c66c4329152cf994b" Dec 12 04:58:02 crc kubenswrapper[4796]: I1212 04:58:02.969536 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 04:58:02 crc kubenswrapper[4796]: I1212 04:58:02.970678 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 04:58:02 crc kubenswrapper[4796]: I1212 04:58:02.970794 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 04:58:02 crc kubenswrapper[4796]: I1212 04:58:02.971704 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 04:58:02 crc kubenswrapper[4796]: I1212 04:58:02.971857 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1" gracePeriod=600 Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.027392 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1" exitCode=0 Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.028441 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1"} Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.028659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5"} Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.028816 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.028814 4796 scope.go:117] "RemoveContainer" containerID="c972db73eaab2458f98bcc92148f56e7f3d05de16f8aaa63f617c41f460205f5" Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.094568 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:58:04 crc kubenswrapper[4796]: I1212 04:58:04.284196 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.047536 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n959c" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="registry-server" containerID="cri-o://ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd" gracePeriod=2 Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.646113 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.807617 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities\") pod \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.807892 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content\") pod \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.808015 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxdq\" (UniqueName: \"kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq\") pod \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\" (UID: \"fccc2f80-02e0-49d1-9af8-44887a1fd8a0\") " Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.808423 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities" (OuterVolumeSpecName: "utilities") pod "fccc2f80-02e0-49d1-9af8-44887a1fd8a0" (UID: "fccc2f80-02e0-49d1-9af8-44887a1fd8a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.809323 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.837499 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq" (OuterVolumeSpecName: "kube-api-access-5rxdq") pod "fccc2f80-02e0-49d1-9af8-44887a1fd8a0" (UID: "fccc2f80-02e0-49d1-9af8-44887a1fd8a0"). InnerVolumeSpecName "kube-api-access-5rxdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.910855 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rxdq\" (UniqueName: \"kubernetes.io/projected/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-kube-api-access-5rxdq\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:06 crc kubenswrapper[4796]: I1212 04:58:06.954230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fccc2f80-02e0-49d1-9af8-44887a1fd8a0" (UID: "fccc2f80-02e0-49d1-9af8-44887a1fd8a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.012759 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc2f80-02e0-49d1-9af8-44887a1fd8a0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.073640 4796 generic.go:334] "Generic (PLEG): container finished" podID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerID="ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd" exitCode=0 Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.073685 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerDied","Data":"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd"} Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.073718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n959c" event={"ID":"fccc2f80-02e0-49d1-9af8-44887a1fd8a0","Type":"ContainerDied","Data":"7c809040e33d1abe343da8167332053da4aa81c2595e33b085306927ec299678"} Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.073745 4796 scope.go:117] "RemoveContainer" containerID="ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.073898 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n959c" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.112878 4796 scope.go:117] "RemoveContainer" containerID="f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.133102 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.141252 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n959c"] Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.150192 4796 scope.go:117] "RemoveContainer" containerID="0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.192699 4796 scope.go:117] "RemoveContainer" containerID="ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd" Dec 12 04:58:07 crc kubenswrapper[4796]: E1212 04:58:07.193190 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd\": container with ID starting with ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd not found: ID does not exist" containerID="ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.193248 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd"} err="failed to get container status \"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd\": rpc error: code = NotFound desc = could not find container \"ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd\": container with ID starting with ffa2820b9dc99f738e01f772020b25da97e2542e0dbbd6bac41b487bb4c4bfdd not found: ID does not exist" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.193361 4796 scope.go:117] "RemoveContainer" containerID="f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44" Dec 12 04:58:07 crc kubenswrapper[4796]: E1212 04:58:07.193816 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44\": container with ID starting with f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44 not found: ID does not exist" containerID="f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.193846 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44"} err="failed to get container status \"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44\": rpc error: code = NotFound desc = could not find container \"f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44\": container with ID starting with f86dd48e223b7a184edeaee6735f9510f146b1092fe8a9fc89d3029464e8df44 not found: ID does not exist" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.193867 4796 scope.go:117] "RemoveContainer" containerID="0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc" Dec 12 04:58:07 crc kubenswrapper[4796]: E1212 04:58:07.194492 4796 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc\": container with ID starting with 0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc not found: ID does not exist" containerID="0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.194515 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc"} err="failed to get container status \"0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc\": rpc error: code = NotFound desc = could not find container \"0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc\": container with ID starting with 0583b96f2063083ca748889e73a2de5a3fce66ad8af4b5f6913782c2f80884bc not found: ID does not exist" Dec 12 04:58:07 crc kubenswrapper[4796]: I1212 04:58:07.423004 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" path="/var/lib/kubelet/pods/fccc2f80-02e0-49d1-9af8-44887a1fd8a0/volumes" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.775733 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7"] Dec 12 04:58:08 crc kubenswrapper[4796]: E1212 04:58:08.776675 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="extract-utilities" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.776693 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="extract-utilities" Dec 12 04:58:08 crc kubenswrapper[4796]: E1212 04:58:08.776712 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="extract-content" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.776719 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="extract-content" Dec 12 04:58:08 crc kubenswrapper[4796]: E1212 04:58:08.776740 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="dnsmasq-dns" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.776748 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="dnsmasq-dns" Dec 12 04:58:08 crc kubenswrapper[4796]: E1212 04:58:08.776791 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="init" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.776798 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="init" Dec 12 04:58:08 crc kubenswrapper[4796]: E1212 04:58:08.776810 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="registry-server" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.776816 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="registry-server" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.777041 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="714ab6ab-6afa-49c0-8218-7b9d565e5667" containerName="dnsmasq-dns" Dec 12 04:58:08 crc 
kubenswrapper[4796]: I1212 04:58:08.777069 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccc2f80-02e0-49d1-9af8-44887a1fd8a0" containerName="registry-server" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.777938 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.780075 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.780137 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.780295 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.782158 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.787530 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7"] Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.952922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.952980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.953186 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxh2\" (UniqueName: \"kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:08 crc kubenswrapper[4796]: I1212 04:58:08.953246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.054659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxh2\" (UniqueName: \"kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.054723 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.054828 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.054850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.061690 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.062044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.062845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.072762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxh2\" (UniqueName: \"kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.095964 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:09 crc kubenswrapper[4796]: I1212 04:58:09.711311 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7"] Dec 12 04:58:10 crc kubenswrapper[4796]: I1212 04:58:10.104206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" event={"ID":"ebb00117-c00a-49db-aeea-bcff226d7283","Type":"ContainerStarted","Data":"c674dd86e7f67c7adde59194d2d16257ca2e6b79de75e549413e3600af022a5f"} Dec 12 04:58:11 crc kubenswrapper[4796]: I1212 04:58:11.116463 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 12 04:58:12 crc kubenswrapper[4796]: I1212 04:58:12.879142 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 12 04:58:24 crc kubenswrapper[4796]: I1212 04:58:24.264307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" event={"ID":"ebb00117-c00a-49db-aeea-bcff226d7283","Type":"ContainerStarted","Data":"e32c4779706b4eb19d5801baed59039f12703a82077b26c0cd68b961ccf43f84"} Dec 12 04:58:24 crc kubenswrapper[4796]: I1212 04:58:24.307326 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" podStartSLOduration=2.894353027 podStartE2EDuration="16.30727407s" podCreationTimestamp="2025-12-12 04:58:08 +0000 UTC" firstStartedPulling="2025-12-12 04:58:09.716883261 +0000 UTC m=+1480.592900408" lastFinishedPulling="2025-12-12 04:58:23.129804304 +0000 UTC m=+1494.005821451" observedRunningTime="2025-12-12 04:58:24.30059167 +0000 UTC m=+1495.176608817" watchObservedRunningTime="2025-12-12 04:58:24.30727407 +0000 UTC m=+1495.183291227" Dec 12 04:58:35 crc kubenswrapper[4796]: I1212 04:58:35.389843 4796 generic.go:334] "Generic (PLEG): container finished" podID="ebb00117-c00a-49db-aeea-bcff226d7283" containerID="e32c4779706b4eb19d5801baed59039f12703a82077b26c0cd68b961ccf43f84" exitCode=0 Dec 12 04:58:35 crc kubenswrapper[4796]: I1212 04:58:35.390369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" event={"ID":"ebb00117-c00a-49db-aeea-bcff226d7283","Type":"ContainerDied","Data":"e32c4779706b4eb19d5801baed59039f12703a82077b26c0cd68b961ccf43f84"} Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.821598 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.981045 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle\") pod \"ebb00117-c00a-49db-aeea-bcff226d7283\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.981345 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory\") pod \"ebb00117-c00a-49db-aeea-bcff226d7283\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.981967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key\") pod \"ebb00117-c00a-49db-aeea-bcff226d7283\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.982631 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfxh2\" (UniqueName: \"kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2\") pod \"ebb00117-c00a-49db-aeea-bcff226d7283\" (UID: \"ebb00117-c00a-49db-aeea-bcff226d7283\") " Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.986803 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ebb00117-c00a-49db-aeea-bcff226d7283" (UID: "ebb00117-c00a-49db-aeea-bcff226d7283"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:58:36 crc kubenswrapper[4796]: I1212 04:58:36.987657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2" (OuterVolumeSpecName: "kube-api-access-pfxh2") pod "ebb00117-c00a-49db-aeea-bcff226d7283" (UID: "ebb00117-c00a-49db-aeea-bcff226d7283"). InnerVolumeSpecName "kube-api-access-pfxh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.020428 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ebb00117-c00a-49db-aeea-bcff226d7283" (UID: "ebb00117-c00a-49db-aeea-bcff226d7283"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.022873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory" (OuterVolumeSpecName: "inventory") pod "ebb00117-c00a-49db-aeea-bcff226d7283" (UID: "ebb00117-c00a-49db-aeea-bcff226d7283"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.085297 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.085348 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfxh2\" (UniqueName: \"kubernetes.io/projected/ebb00117-c00a-49db-aeea-bcff226d7283-kube-api-access-pfxh2\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.085373 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.085392 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb00117-c00a-49db-aeea-bcff226d7283-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.409475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" event={"ID":"ebb00117-c00a-49db-aeea-bcff226d7283","Type":"ContainerDied","Data":"c674dd86e7f67c7adde59194d2d16257ca2e6b79de75e549413e3600af022a5f"} Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.409814 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c674dd86e7f67c7adde59194d2d16257ca2e6b79de75e549413e3600af022a5f" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.409592 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.522192 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc"] Dec 12 04:58:37 crc kubenswrapper[4796]: E1212 04:58:37.522698 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb00117-c00a-49db-aeea-bcff226d7283" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.522717 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb00117-c00a-49db-aeea-bcff226d7283" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.522932 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb00117-c00a-49db-aeea-bcff226d7283" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.523626 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.534304 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc"] Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.571712 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.572324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.572685 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.572983 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.695612 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.695709 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lm4z\" (UniqueName: \"kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.695817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.797768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.798549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.798619 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lm4z\" (UniqueName: \"kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.806030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.806150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.814533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lm4z\" (UniqueName: \"kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hhdhc\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:37 crc kubenswrapper[4796]: I1212 04:58:37.891416 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:38 crc kubenswrapper[4796]: I1212 04:58:38.411590 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc"] Dec 12 04:58:38 crc kubenswrapper[4796]: W1212 04:58:38.423516 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c5ed15_7a61_4101_b8f4_470f53ef2a10.slice/crio-618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564 WatchSource:0}: Error finding container 618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564: Status 404 returned error can't find the container with id 618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564 Dec 12 04:58:39 crc kubenswrapper[4796]: I1212 04:58:39.440987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" event={"ID":"47c5ed15-7a61-4101-b8f4-470f53ef2a10","Type":"ContainerStarted","Data":"846e8b38885a68e1266d4f1d6efcfa565c3376885856975578a15b83a56b218c"} Dec 12 04:58:39 crc kubenswrapper[4796]: I1212 04:58:39.441380 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" event={"ID":"47c5ed15-7a61-4101-b8f4-470f53ef2a10","Type":"ContainerStarted","Data":"618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564"} Dec 12 04:58:39 crc kubenswrapper[4796]: I1212 04:58:39.466039 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" podStartSLOduration=2.30523759 podStartE2EDuration="2.466019191s" podCreationTimestamp="2025-12-12 04:58:37 +0000 UTC" firstStartedPulling="2025-12-12 04:58:38.42675592 +0000 UTC m=+1509.302773057" lastFinishedPulling="2025-12-12 04:58:38.587537521 +0000 UTC m=+1509.463554658" observedRunningTime="2025-12-12 04:58:39.457260498 +0000 UTC m=+1510.333277645" watchObservedRunningTime="2025-12-12 04:58:39.466019191 +0000 UTC m=+1510.342036338" 
Dec 12 04:58:41 crc kubenswrapper[4796]: I1212 04:58:41.470770 4796 generic.go:334] "Generic (PLEG): container finished" podID="47c5ed15-7a61-4101-b8f4-470f53ef2a10" containerID="846e8b38885a68e1266d4f1d6efcfa565c3376885856975578a15b83a56b218c" exitCode=0 Dec 12 04:58:41 crc kubenswrapper[4796]: I1212 04:58:41.470863 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" event={"ID":"47c5ed15-7a61-4101-b8f4-470f53ef2a10","Type":"ContainerDied","Data":"846e8b38885a68e1266d4f1d6efcfa565c3376885856975578a15b83a56b218c"} Dec 12 04:58:42 crc kubenswrapper[4796]: I1212 04:58:42.936165 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.102462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory\") pod \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.102986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lm4z\" (UniqueName: \"kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z\") pod \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.103148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key\") pod \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\" (UID: \"47c5ed15-7a61-4101-b8f4-470f53ef2a10\") " Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.111674 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z" (OuterVolumeSpecName: "kube-api-access-2lm4z") pod "47c5ed15-7a61-4101-b8f4-470f53ef2a10" (UID: "47c5ed15-7a61-4101-b8f4-470f53ef2a10"). InnerVolumeSpecName "kube-api-access-2lm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.131620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "47c5ed15-7a61-4101-b8f4-470f53ef2a10" (UID: "47c5ed15-7a61-4101-b8f4-470f53ef2a10"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.132747 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory" (OuterVolumeSpecName: "inventory") pod "47c5ed15-7a61-4101-b8f4-470f53ef2a10" (UID: "47c5ed15-7a61-4101-b8f4-470f53ef2a10"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.205568 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.205596 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47c5ed15-7a61-4101-b8f4-470f53ef2a10-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.205610 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lm4z\" (UniqueName: \"kubernetes.io/projected/47c5ed15-7a61-4101-b8f4-470f53ef2a10-kube-api-access-2lm4z\") on node \"crc\" DevicePath \"\"" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.532993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" event={"ID":"47c5ed15-7a61-4101-b8f4-470f53ef2a10","Type":"ContainerDied","Data":"618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564"} Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.533322 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618dadc1d74f4e9a9fa02fc0e4f846a2a89b8970eb57328a4a4850e9a6c30564" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.533374 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hhdhc" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.607554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6"] Dec 12 04:58:43 crc kubenswrapper[4796]: E1212 04:58:43.608011 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c5ed15-7a61-4101-b8f4-470f53ef2a10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.608026 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c5ed15-7a61-4101-b8f4-470f53ef2a10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.608199 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c5ed15-7a61-4101-b8f4-470f53ef2a10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.608800 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.613101 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.614270 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.615142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.615322 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.627794 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6"] Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.729950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.730073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.730135 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbzq\" (UniqueName: \"kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.730215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.832155 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.832254 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.832327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbzq\" (UniqueName: \"kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.832410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.836257 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.838036 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.848247 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.858950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdbzq\" (UniqueName: \"kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:43 crc kubenswrapper[4796]: I1212 04:58:43.947515 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 04:58:44 crc kubenswrapper[4796]: I1212 04:58:44.490105 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6"] Dec 12 04:58:44 crc kubenswrapper[4796]: I1212 04:58:44.541676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" event={"ID":"86779d4a-5602-4b32-8e50-cd72fac17e8a","Type":"ContainerStarted","Data":"d0aefa8fb67d396e1c0ffbb546b6812a4c919024bf7ad142e77086a526785b89"} Dec 12 04:58:45 crc kubenswrapper[4796]: I1212 04:58:45.551067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" event={"ID":"86779d4a-5602-4b32-8e50-cd72fac17e8a","Type":"ContainerStarted","Data":"4d2bc98530928465ce8728ec97e74cffcfdd25b882f787e5cca562966ca18008"} Dec 12 04:58:45 crc kubenswrapper[4796]: I1212 04:58:45.569289 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" podStartSLOduration=2.392164928 podStartE2EDuration="2.56926873s" podCreationTimestamp="2025-12-12 04:58:43 +0000 UTC" firstStartedPulling="2025-12-12 04:58:44.491876346 +0000 UTC m=+1515.367893493" lastFinishedPulling="2025-12-12 04:58:44.668980148 +0000 UTC m=+1515.544997295" observedRunningTime="2025-12-12 04:58:45.566733261 +0000 UTC m=+1516.442750408" watchObservedRunningTime="2025-12-12 04:58:45.56926873 +0000 UTC m=+1516.445285887" Dec 12 04:59:00 crc kubenswrapper[4796]: I1212 04:59:00.719139 4796 scope.go:117] "RemoveContainer" containerID="f87341611826664cf74a9dff24dbf0d1dd36ecc0f6f0d2058621fbcf8ce02f0d" Dec 12 04:59:07 crc kubenswrapper[4796]: I1212 04:59:07.901344 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:07 crc kubenswrapper[4796]: I1212 04:59:07.904198 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:07 crc kubenswrapper[4796]: I1212 04:59:07.916762 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.003826 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.003922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.004017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctkm\" (UniqueName: \"kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.106990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctkm\" (UniqueName: \"kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.107137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.107221 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.107684 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.107896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.125113 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2ctkm\" (UniqueName: \"kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm\") pod \"redhat-marketplace-nt2tl\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.231651 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.689819 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:08 crc kubenswrapper[4796]: I1212 04:59:08.781236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerStarted","Data":"ece51b41d9558aea3cc6c56408fc9fb0af440af783845051526bdf536bfef28a"} Dec 12 04:59:09 crc kubenswrapper[4796]: I1212 04:59:09.792193 4796 generic.go:334] "Generic (PLEG): container finished" podID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerID="f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752" exitCode=0 Dec 12 04:59:09 crc kubenswrapper[4796]: I1212 04:59:09.792333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerDied","Data":"f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752"} Dec 12 04:59:10 crc kubenswrapper[4796]: I1212 04:59:10.802606 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerStarted","Data":"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7"} Dec 12 04:59:11 crc kubenswrapper[4796]: I1212 04:59:11.812611 4796 generic.go:334] "Generic (PLEG): container finished" podID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerID="926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7" exitCode=0 Dec 12 04:59:11 crc kubenswrapper[4796]: I1212 04:59:11.812653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerDied","Data":"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7"} Dec 12 04:59:12 crc kubenswrapper[4796]: I1212 04:59:12.824773 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerStarted","Data":"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c"} Dec 12 04:59:12 crc kubenswrapper[4796]: I1212 04:59:12.851960 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nt2tl" podStartSLOduration=3.39146759 podStartE2EDuration="5.851932666s" podCreationTimestamp="2025-12-12 04:59:07 +0000 UTC" firstStartedPulling="2025-12-12 04:59:09.795063087 +0000 UTC m=+1540.671080234" lastFinishedPulling="2025-12-12 04:59:12.255528163 +0000 UTC m=+1543.131545310" observedRunningTime="2025-12-12 04:59:12.847079494 +0000 UTC m=+1543.723096641" watchObservedRunningTime="2025-12-12 04:59:12.851932666 +0000 UTC m=+1543.727949833" Dec 12 04:59:18 crc kubenswrapper[4796]: I1212 04:59:18.232156 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:18 crc kubenswrapper[4796]: I1212 04:59:18.232977 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:18 crc kubenswrapper[4796]: I1212 04:59:18.287019 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:19 crc kubenswrapper[4796]: I1212 04:59:19.143585 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:19 crc kubenswrapper[4796]: I1212 04:59:19.206958 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.105138 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nt2tl" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="registry-server" containerID="cri-o://97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c" gracePeriod=2 Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.616621 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.771301 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctkm\" (UniqueName: \"kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm\") pod \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.771431 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content\") pod \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.771479 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities\") pod \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\" (UID: \"3be76b07-d3bd-4ffe-9a41-bf4de057b533\") " Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.772594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities" (OuterVolumeSpecName: "utilities") pod "3be76b07-d3bd-4ffe-9a41-bf4de057b533" (UID: "3be76b07-d3bd-4ffe-9a41-bf4de057b533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.777567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm" (OuterVolumeSpecName: "kube-api-access-2ctkm") pod "3be76b07-d3bd-4ffe-9a41-bf4de057b533" (UID: "3be76b07-d3bd-4ffe-9a41-bf4de057b533"). InnerVolumeSpecName "kube-api-access-2ctkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.802629 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3be76b07-d3bd-4ffe-9a41-bf4de057b533" (UID: "3be76b07-d3bd-4ffe-9a41-bf4de057b533"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.873262 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.873306 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ctkm\" (UniqueName: \"kubernetes.io/projected/3be76b07-d3bd-4ffe-9a41-bf4de057b533-kube-api-access-2ctkm\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:21 crc kubenswrapper[4796]: I1212 04:59:21.873317 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3be76b07-d3bd-4ffe-9a41-bf4de057b533-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.114949 4796 generic.go:334] "Generic (PLEG): container finished" podID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerID="97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c" exitCode=0 Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.115035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerDied","Data":"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c"} Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.115063 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt2tl" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.115898 4796 scope.go:117] "RemoveContainer" containerID="97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.115831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt2tl" event={"ID":"3be76b07-d3bd-4ffe-9a41-bf4de057b533","Type":"ContainerDied","Data":"ece51b41d9558aea3cc6c56408fc9fb0af440af783845051526bdf536bfef28a"} Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.136996 4796 scope.go:117] "RemoveContainer" containerID="926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.159759 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.169354 4796 scope.go:117] "RemoveContainer" containerID="f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.169752 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt2tl"] Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.211934 4796 scope.go:117] "RemoveContainer" containerID="97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c" Dec 12 04:59:22 crc kubenswrapper[4796]: E1212 04:59:22.212469 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c\": container with ID starting with 97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c not found: ID does not exist" containerID="97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.212531 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c"} err="failed to get container status \"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c\": rpc error: code = NotFound desc = could not find container \"97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c\": container with ID starting with 97d1ceb422cce6c489e2088829c4eb6988620271465fb8b99bfe0ec30c373d1c not found: ID does not exist" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.212576 4796 scope.go:117] "RemoveContainer" containerID="926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7" Dec 12 04:59:22 crc kubenswrapper[4796]: E1212 04:59:22.213084 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7\": container with ID starting with 926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7 not found: ID does not exist" containerID="926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.213120 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7"} err="failed to get container status \"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7\": rpc error: code = NotFound desc = could not find 
container \"926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7\": container with ID starting with 926694d01429b9ad668f7fadabb4e5540cc59b5c284c37630b8d54aab0f175b7 not found: ID does not exist" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.213142 4796 scope.go:117] "RemoveContainer" containerID="f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752" Dec 12 04:59:22 crc kubenswrapper[4796]: E1212 04:59:22.213491 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752\": container with ID starting with f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752 not found: ID does not exist" containerID="f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752" Dec 12 04:59:22 crc kubenswrapper[4796]: I1212 04:59:22.213519 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752"} err="failed to get container status \"f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752\": rpc error: code = NotFound desc = could not find container \"f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752\": container with ID starting with f91d162e321fa07ca55e6468eee9225dcdc3319cd96bc47bb8ba43acb0d63752 not found: ID does not exist" Dec 12 04:59:23 crc kubenswrapper[4796]: I1212 04:59:23.426564 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" path="/var/lib/kubelet/pods/3be76b07-d3bd-4ffe-9a41-bf4de057b533/volumes" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.970517 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:32 crc kubenswrapper[4796]: E1212 04:59:32.971463 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="extract-content" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.971479 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="extract-content" Dec 12 04:59:32 crc kubenswrapper[4796]: E1212 04:59:32.971503 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="extract-utilities" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.971514 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="extract-utilities" Dec 12 04:59:32 crc kubenswrapper[4796]: E1212 04:59:32.971614 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="registry-server" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.971628 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="registry-server" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.971988 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be76b07-d3bd-4ffe-9a41-bf4de057b533" containerName="registry-server" Dec 12 04:59:32 crc kubenswrapper[4796]: I1212 04:59:32.973754 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.000466 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.098503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8xl\" (UniqueName: \"kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.098604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.098637 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.200662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8xl\" (UniqueName: \"kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.200771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.200813 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.201405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.201498 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.223416 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6k8xl\" (UniqueName: \"kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl\") pod \"community-operators-l5qh2\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.299380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:33 crc kubenswrapper[4796]: I1212 04:59:33.735629 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:34 crc kubenswrapper[4796]: I1212 04:59:34.234984 4796 generic.go:334] "Generic (PLEG): container finished" podID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerID="a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93" exitCode=0 Dec 12 04:59:34 crc kubenswrapper[4796]: I1212 04:59:34.235025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerDied","Data":"a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93"} Dec 12 04:59:34 crc kubenswrapper[4796]: I1212 04:59:34.235072 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerStarted","Data":"85ed64e3125b1dfc3f3e13e0400598873fcec927d74577bd44a7358b0466deda"} Dec 12 04:59:36 crc kubenswrapper[4796]: I1212 04:59:36.285773 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerStarted","Data":"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c"} Dec 12 04:59:37 crc kubenswrapper[4796]: I1212 04:59:37.298149 4796 generic.go:334] "Generic (PLEG): container finished" podID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerID="f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c" exitCode=0 Dec 12 04:59:37 crc kubenswrapper[4796]: I1212 04:59:37.298229 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerDied","Data":"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c"} Dec 12 04:59:38 crc kubenswrapper[4796]: I1212 04:59:38.310632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerStarted","Data":"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859"} Dec 12 04:59:38 crc kubenswrapper[4796]: I1212 04:59:38.338050 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l5qh2" podStartSLOduration=2.561999107 podStartE2EDuration="6.33802525s" podCreationTimestamp="2025-12-12 04:59:32 +0000 UTC" firstStartedPulling="2025-12-12 04:59:34.236686808 +0000 UTC m=+1565.112703955" lastFinishedPulling="2025-12-12 04:59:38.012712941 +0000 UTC m=+1568.888730098" observedRunningTime="2025-12-12 04:59:38.325902142 +0000 UTC m=+1569.201919349" watchObservedRunningTime="2025-12-12 04:59:38.33802525 +0000 UTC m=+1569.214042397" Dec 12 04:59:43 crc kubenswrapper[4796]: I1212 04:59:43.300314 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:43 crc kubenswrapper[4796]: I1212 04:59:43.300797 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:43 crc kubenswrapper[4796]: I1212 04:59:43.355222 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:43 crc kubenswrapper[4796]: I1212 04:59:43.407212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:43 crc kubenswrapper[4796]: I1212 04:59:43.592012 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.369147 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l5qh2" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="registry-server" containerID="cri-o://67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859" gracePeriod=2 Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.791699 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.931638 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8xl\" (UniqueName: \"kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl\") pod \"916680c2-db9a-4583-9bf8-97f4727b1bed\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.931697 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities\") pod \"916680c2-db9a-4583-9bf8-97f4727b1bed\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.931769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content\") pod \"916680c2-db9a-4583-9bf8-97f4727b1bed\" (UID: \"916680c2-db9a-4583-9bf8-97f4727b1bed\") " Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.932563 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities" (OuterVolumeSpecName: "utilities") pod "916680c2-db9a-4583-9bf8-97f4727b1bed" (UID: "916680c2-db9a-4583-9bf8-97f4727b1bed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.937093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl" (OuterVolumeSpecName: "kube-api-access-6k8xl") pod "916680c2-db9a-4583-9bf8-97f4727b1bed" (UID: "916680c2-db9a-4583-9bf8-97f4727b1bed"). InnerVolumeSpecName "kube-api-access-6k8xl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 04:59:45 crc kubenswrapper[4796]: I1212 04:59:45.985261 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "916680c2-db9a-4583-9bf8-97f4727b1bed" (UID: "916680c2-db9a-4583-9bf8-97f4727b1bed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.034436 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.034472 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8xl\" (UniqueName: \"kubernetes.io/projected/916680c2-db9a-4583-9bf8-97f4727b1bed-kube-api-access-6k8xl\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.034485 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916680c2-db9a-4583-9bf8-97f4727b1bed-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.392363 4796 generic.go:334] "Generic (PLEG): container finished" podID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerID="67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859" exitCode=0 Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.392403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerDied","Data":"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859"} Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.392433 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5qh2" event={"ID":"916680c2-db9a-4583-9bf8-97f4727b1bed","Type":"ContainerDied","Data":"85ed64e3125b1dfc3f3e13e0400598873fcec927d74577bd44a7358b0466deda"} Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.392449 4796 scope.go:117] "RemoveContainer" containerID="67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.392577 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5qh2" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.423545 4796 scope.go:117] "RemoveContainer" containerID="f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.434757 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.446997 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l5qh2"] Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.455491 4796 scope.go:117] "RemoveContainer" containerID="a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.514310 4796 scope.go:117] "RemoveContainer" containerID="67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859" Dec 12 04:59:46 crc kubenswrapper[4796]: E1212 04:59:46.514890 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859\": container with ID starting with 67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859 not found: ID does not exist" containerID="67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.515010 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859"} err="failed to get container status \"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859\": rpc error: code = NotFound desc = could not find container \"67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859\": container with ID starting with 67de21d0bc9e7e940e962a2950246edb810ed6900d594194d48fd8caa84bc859 not found: ID does not exist" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.515104 4796 scope.go:117] "RemoveContainer" containerID="f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c" Dec 12 04:59:46 crc kubenswrapper[4796]: E1212 04:59:46.515860 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c\": container with ID starting with f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c not found: ID does not exist" containerID="f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.515927 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c"} err="failed to get container status \"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c\": rpc error: code = NotFound desc = could not find container \"f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c\": container with ID starting with f7bea00190903ab44723885d1d4bfefd2ab1a3f532e1509769f20f5be093e28c not found: ID does not exist" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.515982 4796 scope.go:117] "RemoveContainer" containerID="a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93" Dec 12 04:59:46 crc kubenswrapper[4796]: E1212 04:59:46.516437 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93\": container with ID starting with a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93 not found: ID does not exist" containerID="a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93" Dec 12 04:59:46 crc kubenswrapper[4796]: I1212 04:59:46.516473 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93"} err="failed to get container status \"a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93\": rpc error: code = NotFound desc = could not find container \"a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93\": container with ID starting with a0c2bc8e531b377aba8334e1b6f07e9309d3c39858e2d699693d047bb3a96c93 not found: ID does not exist" Dec 12 04:59:47 crc kubenswrapper[4796]: I1212 04:59:47.423247 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" path="/var/lib/kubelet/pods/916680c2-db9a-4583-9bf8-97f4727b1bed/volumes" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.008899 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 04:59:49 crc kubenswrapper[4796]: E1212 04:59:49.024825 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="extract-content" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.024893 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="extract-content" Dec 12 04:59:49 crc kubenswrapper[4796]: E1212 04:59:49.024985 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="extract-utilities" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.025002 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="extract-utilities" Dec 12 04:59:49 crc kubenswrapper[4796]: E1212 04:59:49.025017 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="registry-server" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.025059 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="registry-server" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.026514 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="916680c2-db9a-4583-9bf8-97f4727b1bed" containerName="registry-server" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.045321 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.045472 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.193374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.193615 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf94c\" (UniqueName: \"kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.193680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.296176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.296249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf94c\" (UniqueName: \"kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.296271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.296773 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.296900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities\") pod \"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.318839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf94c\" (UniqueName: \"kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c\") pod 
\"certified-operators-jj2g2\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.375708 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:49 crc kubenswrapper[4796]: I1212 04:59:49.834447 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 04:59:50 crc kubenswrapper[4796]: I1212 04:59:50.431133 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerID="7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be" exitCode=0 Dec 12 04:59:50 crc kubenswrapper[4796]: I1212 04:59:50.431425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerDied","Data":"7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be"} Dec 12 04:59:50 crc kubenswrapper[4796]: I1212 04:59:50.431454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerStarted","Data":"aa5931b8c854517f1486425f2fb4d40aa15e18d652b3703a1eba1b02cdc9849e"} Dec 12 04:59:51 crc kubenswrapper[4796]: I1212 04:59:51.442535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerStarted","Data":"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d"} Dec 12 04:59:53 crc kubenswrapper[4796]: I1212 04:59:53.472624 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerID="9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d" exitCode=0 Dec 12 04:59:53 crc kubenswrapper[4796]: I1212 04:59:53.472834 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerDied","Data":"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d"} Dec 12 04:59:54 crc kubenswrapper[4796]: I1212 04:59:54.483496 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerStarted","Data":"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674"} Dec 12 04:59:54 crc kubenswrapper[4796]: I1212 04:59:54.502762 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jj2g2" podStartSLOduration=2.792966973 podStartE2EDuration="6.502739673s" podCreationTimestamp="2025-12-12 04:59:48 +0000 UTC" firstStartedPulling="2025-12-12 04:59:50.442077603 +0000 UTC m=+1581.318094750" lastFinishedPulling="2025-12-12 04:59:54.151850263 +0000 UTC m=+1585.027867450" observedRunningTime="2025-12-12 04:59:54.498444229 +0000 UTC m=+1585.374461376" watchObservedRunningTime="2025-12-12 04:59:54.502739673 +0000 UTC m=+1585.378756830" Dec 12 04:59:59 crc kubenswrapper[4796]: I1212 04:59:59.376481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:59 crc kubenswrapper[4796]: I1212 04:59:59.377060 4796 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:59 crc kubenswrapper[4796]: I1212 04:59:59.428832 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:59 crc kubenswrapper[4796]: I1212 04:59:59.601353 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 04:59:59 crc kubenswrapper[4796]: I1212 04:59:59.671646 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.153929 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs"] Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.155549 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.163683 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.165580 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.169678 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs"] Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.339632 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhpl\" (UniqueName: \"kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.339767 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.339855 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.441121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhpl\" (UniqueName: \"kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.441215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.441359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.442213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.447958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.462700 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhpl\" (UniqueName: \"kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl\") pod \"collect-profiles-29425260-tlrvs\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.482533 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.824562 4796 scope.go:117] "RemoveContainer" containerID="741fe1ca77a568f4afe310310736db014c37377d494eed0ccb414e01fe71b5f8" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.851553 4796 scope.go:117] "RemoveContainer" containerID="ef32e79269831a71285933d9c85cdf8359cd004801eabddbb14c41797e90f0be" Dec 12 05:00:00 crc kubenswrapper[4796]: I1212 05:00:00.966524 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs"] Dec 12 05:00:01 crc kubenswrapper[4796]: I1212 05:00:01.556871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" event={"ID":"a89e4806-1860-412b-a6e6-358cc04c1bce","Type":"ContainerStarted","Data":"d5010b54026484c2fd99d98da272d69020363a12abe53537bff8a85f55283cc2"} Dec 12 05:00:01 crc kubenswrapper[4796]: I1212 05:00:01.558136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" event={"ID":"a89e4806-1860-412b-a6e6-358cc04c1bce","Type":"ContainerStarted","Data":"b8b21902100e3a9d36def8f91518e7b94ffd2299c834fef26b106d1d3831af01"} Dec 12 05:00:01 crc kubenswrapper[4796]: I1212 05:00:01.557156 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jj2g2" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="registry-server" containerID="cri-o://1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674" gracePeriod=2 Dec 12 05:00:01 crc kubenswrapper[4796]: I1212 05:00:01.583423 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" podStartSLOduration=1.583406729 podStartE2EDuration="1.583406729s" podCreationTimestamp="2025-12-12 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:00:01.573990044 +0000 UTC m=+1592.450007191" watchObservedRunningTime="2025-12-12 05:00:01.583406729 +0000 UTC m=+1592.459423866" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.031658 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.186199 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content\") pod \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.186382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf94c\" (UniqueName: \"kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c\") pod \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.186493 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities\") pod \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\" (UID: \"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9\") " Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.187221 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities" (OuterVolumeSpecName: "utilities") pod "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" (UID: "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.197545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c" (OuterVolumeSpecName: "kube-api-access-kf94c") pod "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" (UID: "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9"). InnerVolumeSpecName "kube-api-access-kf94c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.271233 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" (UID: "3e06c5d6-1846-4657-bbd8-ebaa8579d5c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.288845 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf94c\" (UniqueName: \"kubernetes.io/projected/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-kube-api-access-kf94c\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.288888 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.288899 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.572913 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerID="1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674" exitCode=0 Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.573024 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jj2g2" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.573063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerDied","Data":"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674"} Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.573106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj2g2" event={"ID":"3e06c5d6-1846-4657-bbd8-ebaa8579d5c9","Type":"ContainerDied","Data":"aa5931b8c854517f1486425f2fb4d40aa15e18d652b3703a1eba1b02cdc9849e"} Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.573139 4796 scope.go:117] "RemoveContainer" containerID="1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.575444 4796 generic.go:334] "Generic (PLEG): container finished" podID="a89e4806-1860-412b-a6e6-358cc04c1bce" containerID="d5010b54026484c2fd99d98da272d69020363a12abe53537bff8a85f55283cc2" exitCode=0 Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.575484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" event={"ID":"a89e4806-1860-412b-a6e6-358cc04c1bce","Type":"ContainerDied","Data":"d5010b54026484c2fd99d98da272d69020363a12abe53537bff8a85f55283cc2"} Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.610400 4796 scope.go:117] "RemoveContainer" containerID="9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.629042 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.637892 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jj2g2"] Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.640090 4796 scope.go:117] "RemoveContainer" containerID="7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.682545 4796 scope.go:117] "RemoveContainer" 
containerID="1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674" Dec 12 05:00:02 crc kubenswrapper[4796]: E1212 05:00:02.683270 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674\": container with ID starting with 1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674 not found: ID does not exist" containerID="1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.683321 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674"} err="failed to get container status \"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674\": rpc error: code = NotFound desc = could not find container \"1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674\": container with ID starting with 1464eedcec509e9abbf7f0dc08e436226bf5713f0845386d81276b53613bb674 not found: ID does not exist" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.683357 4796 scope.go:117] "RemoveContainer" containerID="9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d" Dec 12 05:00:02 crc kubenswrapper[4796]: E1212 05:00:02.683633 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d\": container with ID starting with 9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d not found: ID does not exist" containerID="9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.683666 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d"} err="failed to get container status \"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d\": rpc error: code = NotFound desc = could not find container \"9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d\": container with ID starting with 9e4e6c4ebbf41e5b5b1577d9f3c41142fcce63d3b7cc09688b2cc5101ae5853d not found: ID does not exist" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.683689 4796 scope.go:117] "RemoveContainer" containerID="7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be" Dec 12 05:00:02 crc kubenswrapper[4796]: E1212 05:00:02.684079 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be\": container with ID starting with 7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be not found: ID does not exist" containerID="7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be" Dec 12 05:00:02 crc kubenswrapper[4796]: I1212 05:00:02.684101 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be"} err="failed to get container status \"7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be\": rpc error: code = NotFound desc = could not find container \"7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be\": container with ID starting with 
7ce707648d3c238bf80eee6b780dcec415b9fc7840ee2fc6392b0db44d1839be not found: ID does not exist" Dec 12 05:00:03 crc kubenswrapper[4796]: I1212 05:00:03.426113 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" path="/var/lib/kubelet/pods/3e06c5d6-1846-4657-bbd8-ebaa8579d5c9/volumes" Dec 12 05:00:03 crc kubenswrapper[4796]: I1212 05:00:03.942968 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.123380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zhpl\" (UniqueName: \"kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl\") pod \"a89e4806-1860-412b-a6e6-358cc04c1bce\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.123537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume\") pod \"a89e4806-1860-412b-a6e6-358cc04c1bce\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.123624 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume\") pod \"a89e4806-1860-412b-a6e6-358cc04c1bce\" (UID: \"a89e4806-1860-412b-a6e6-358cc04c1bce\") " Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.124729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume" (OuterVolumeSpecName: "config-volume") pod "a89e4806-1860-412b-a6e6-358cc04c1bce" (UID: "a89e4806-1860-412b-a6e6-358cc04c1bce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.128544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl" (OuterVolumeSpecName: "kube-api-access-6zhpl") pod "a89e4806-1860-412b-a6e6-358cc04c1bce" (UID: "a89e4806-1860-412b-a6e6-358cc04c1bce"). InnerVolumeSpecName "kube-api-access-6zhpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.129234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a89e4806-1860-412b-a6e6-358cc04c1bce" (UID: "a89e4806-1860-412b-a6e6-358cc04c1bce"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.226578 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89e4806-1860-412b-a6e6-358cc04c1bce-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.226622 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zhpl\" (UniqueName: \"kubernetes.io/projected/a89e4806-1860-412b-a6e6-358cc04c1bce-kube-api-access-6zhpl\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.226638 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a89e4806-1860-412b-a6e6-358cc04c1bce-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.600150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" event={"ID":"a89e4806-1860-412b-a6e6-358cc04c1bce","Type":"ContainerDied","Data":"b8b21902100e3a9d36def8f91518e7b94ffd2299c834fef26b106d1d3831af01"} Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.600192 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs" Dec 12 05:00:04 crc kubenswrapper[4796]: I1212 05:00:04.600216 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8b21902100e3a9d36def8f91518e7b94ffd2299c834fef26b106d1d3831af01" Dec 12 05:00:32 crc kubenswrapper[4796]: I1212 05:00:32.969294 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:00:32 crc kubenswrapper[4796]: I1212 05:00:32.969797 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.155607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29425261-srdtz"] Dec 12 05:01:00 crc kubenswrapper[4796]: E1212 05:01:00.156551 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="extract-content" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156566 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="extract-content" Dec 12 05:01:00 crc kubenswrapper[4796]: E1212 05:01:00.156597 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89e4806-1860-412b-a6e6-358cc04c1bce" containerName="collect-profiles" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156605 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89e4806-1860-412b-a6e6-358cc04c1bce" containerName="collect-profiles" Dec 12 05:01:00 crc kubenswrapper[4796]: E1212 05:01:00.156617 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="registry-server" 
Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156626 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="registry-server" Dec 12 05:01:00 crc kubenswrapper[4796]: E1212 05:01:00.156653 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="extract-utilities" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156660 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="extract-utilities" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156858 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89e4806-1860-412b-a6e6-358cc04c1bce" containerName="collect-profiles" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.156884 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e06c5d6-1846-4657-bbd8-ebaa8579d5c9" containerName="registry-server" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.157692 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.171332 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425261-srdtz"] Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.330125 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9lbt\" (UniqueName: \"kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.330614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.330677 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.330749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.431930 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.432002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9lbt\" (UniqueName: 
\"kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.432081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.432123 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.446139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.446467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.447148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.455093 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9lbt\" (UniqueName: \"kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt\") pod \"keystone-cron-29425261-srdtz\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.479337 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:00 crc kubenswrapper[4796]: I1212 05:01:00.978762 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425261-srdtz"] Dec 12 05:01:01 crc kubenswrapper[4796]: I1212 05:01:01.184259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425261-srdtz" event={"ID":"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9","Type":"ContainerStarted","Data":"1de3694026ec418598a4a154609073725916a8a8092117c911eb1420ee4d8ce2"} Dec 12 05:01:01 crc kubenswrapper[4796]: I1212 05:01:01.184632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425261-srdtz" event={"ID":"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9","Type":"ContainerStarted","Data":"ee01dd7d33d5dd970cc7234c93050893e65d3b8469b3244c2f408ebbf79745b0"} Dec 12 05:01:01 crc kubenswrapper[4796]: I1212 05:01:01.206808 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29425261-srdtz" podStartSLOduration=1.2067699219999999 podStartE2EDuration="1.206769922s" podCreationTimestamp="2025-12-12 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:01:01.197703829 +0000 UTC m=+1652.073720996" watchObservedRunningTime="2025-12-12 05:01:01.206769922 +0000 UTC m=+1652.082787069" Dec 12 05:01:02 crc kubenswrapper[4796]: I1212 05:01:02.970051 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:01:02 crc kubenswrapper[4796]: I1212 05:01:02.970447 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:01:04 crc kubenswrapper[4796]: I1212 05:01:04.226090 4796 generic.go:334] "Generic (PLEG): container finished" podID="7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" containerID="1de3694026ec418598a4a154609073725916a8a8092117c911eb1420ee4d8ce2" exitCode=0 Dec 12 05:01:04 crc kubenswrapper[4796]: I1212 05:01:04.226168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425261-srdtz" event={"ID":"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9","Type":"ContainerDied","Data":"1de3694026ec418598a4a154609073725916a8a8092117c911eb1420ee4d8ce2"} Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.563123 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.740923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9lbt\" (UniqueName: \"kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt\") pod \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.741057 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle\") pod \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.741094 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data\") pod \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.741139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys\") pod \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\" (UID: \"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9\") " Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.747860 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" (UID: "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.756806 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt" (OuterVolumeSpecName: "kube-api-access-v9lbt") pod "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" (UID: "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9"). InnerVolumeSpecName "kube-api-access-v9lbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.773509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" (UID: "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.801668 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data" (OuterVolumeSpecName: "config-data") pod "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" (UID: "7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.843254 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9lbt\" (UniqueName: \"kubernetes.io/projected/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-kube-api-access-v9lbt\") on node \"crc\" DevicePath \"\"" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.843316 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.843331 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 05:01:05 crc kubenswrapper[4796]: I1212 05:01:05.843343 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 05:01:06 crc kubenswrapper[4796]: I1212 05:01:06.248205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425261-srdtz" event={"ID":"7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9","Type":"ContainerDied","Data":"ee01dd7d33d5dd970cc7234c93050893e65d3b8469b3244c2f408ebbf79745b0"} Dec 12 05:01:06 crc kubenswrapper[4796]: I1212 05:01:06.248476 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee01dd7d33d5dd970cc7234c93050893e65d3b8469b3244c2f408ebbf79745b0" Dec 12 05:01:06 crc kubenswrapper[4796]: I1212 05:01:06.248382 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425261-srdtz" Dec 12 05:01:32 crc kubenswrapper[4796]: I1212 05:01:32.969795 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:01:32 crc kubenswrapper[4796]: I1212 05:01:32.970318 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:01:32 crc kubenswrapper[4796]: I1212 05:01:32.970366 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:01:32 crc kubenswrapper[4796]: I1212 05:01:32.971006 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:01:32 crc kubenswrapper[4796]: I1212 05:01:32.971071 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" 
containerID="cri-o://d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" gracePeriod=600 Dec 12 05:01:33 crc kubenswrapper[4796]: E1212 05:01:33.109040 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:01:33 crc kubenswrapper[4796]: E1212 05:01:33.204586 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0403e92c_3d00_4092_a6d0_cdbc36b3ec1c.slice/crio-conmon-d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5.scope\": RecentStats: unable to find data in memory cache]" Dec 12 05:01:33 crc kubenswrapper[4796]: I1212 05:01:33.524156 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" exitCode=0 Dec 12 05:01:33 crc kubenswrapper[4796]: I1212 05:01:33.524452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5"} Dec 12 05:01:33 crc kubenswrapper[4796]: I1212 05:01:33.524634 4796 scope.go:117] "RemoveContainer" containerID="ffda408d796b66de9636479ae49cc06325aa5f1abbab5ccb1554a19b15d504a1" Dec 12 05:01:33 crc kubenswrapper[4796]: I1212 05:01:33.525408 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:01:33 crc kubenswrapper[4796]: E1212 05:01:33.525701 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:01:45 crc kubenswrapper[4796]: I1212 05:01:45.065621 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7855-account-create-update-4pqr9"] Dec 12 05:01:45 crc kubenswrapper[4796]: I1212 05:01:45.076467 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7855-account-create-update-4pqr9"] Dec 12 05:01:45 crc kubenswrapper[4796]: I1212 05:01:45.411126 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:01:45 crc kubenswrapper[4796]: E1212 05:01:45.411405 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:01:45 crc kubenswrapper[4796]: I1212 05:01:45.421650 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeccb12-7501-4d76-b545-e0a667235668" path="/var/lib/kubelet/pods/8eeccb12-7501-4d76-b545-e0a667235668/volumes" Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.044580 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rdhff"] Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.056661 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bc07-account-create-update-h44pp"] Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.071961 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9j8zv"] Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.083580 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bc07-account-create-update-h44pp"] Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.092665 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9j8zv"] Dec 12 05:01:46 crc kubenswrapper[4796]: I1212 05:01:46.101184 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rdhff"] Dec 12 05:01:47 crc kubenswrapper[4796]: I1212 05:01:47.425778 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f494e02-983f-4a3e-ada3-001cc79da003" path="/var/lib/kubelet/pods/1f494e02-983f-4a3e-ada3-001cc79da003/volumes" Dec 12 05:01:47 crc kubenswrapper[4796]: I1212 05:01:47.428521 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7366c77a-ccc0-44d6-85cf-e8048e5daedc" path="/var/lib/kubelet/pods/7366c77a-ccc0-44d6-85cf-e8048e5daedc/volumes" Dec 12 05:01:47 crc kubenswrapper[4796]: I1212 05:01:47.430221 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df906188-0fd2-43e8-8dca-16474c2ab546" path="/var/lib/kubelet/pods/df906188-0fd2-43e8-8dca-16474c2ab546/volumes" Dec 12 05:01:52 crc kubenswrapper[4796]: I1212 05:01:52.052650 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pqzxn"] Dec 12 05:01:52 crc kubenswrapper[4796]: I1212 05:01:52.063264 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bee9-account-create-update-cb5v4"] Dec 12 05:01:52 crc kubenswrapper[4796]: I1212 05:01:52.071707 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pqzxn"] Dec 12 05:01:52 crc kubenswrapper[4796]: I1212 05:01:52.080346 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bee9-account-create-update-cb5v4"] Dec 12 05:01:53 crc kubenswrapper[4796]: I1212 05:01:53.421519 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57685147-1da1-474a-ab9d-b33e05450527" path="/var/lib/kubelet/pods/57685147-1da1-474a-ab9d-b33e05450527/volumes" Dec 12 05:01:53 crc kubenswrapper[4796]: I1212 05:01:53.423386 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e0850-2c85-40bf-bb1a-03843b97b6ac" path="/var/lib/kubelet/pods/f85e0850-2c85-40bf-bb1a-03843b97b6ac/volumes" Dec 12 05:02:00 crc kubenswrapper[4796]: I1212 05:02:00.411798 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:02:00 crc kubenswrapper[4796]: E1212 05:02:00.412962 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:02:00 crc kubenswrapper[4796]: I1212 05:02:00.998351 4796 scope.go:117] "RemoveContainer" containerID="12778d9cfd36dfeda7807972e2f53f34edd8061dc3e8ad04b76d8bc5426c8e60" Dec 12 05:02:01 crc kubenswrapper[4796]: I1212 05:02:01.035227 4796 scope.go:117] "RemoveContainer" containerID="84e41ed23ab2b9e0c16f858febcbbc7d61252dfbfe0bef02f950eb37d9f0c034" Dec 12 05:02:01 crc kubenswrapper[4796]: I1212 05:02:01.077327 4796 scope.go:117] "RemoveContainer" containerID="744c84167243342353b0c775a602df649c82bec21d14427a85f9c01137155509" Dec 12 05:02:01 crc kubenswrapper[4796]: I1212 05:02:01.119931 4796 scope.go:117] "RemoveContainer" containerID="b71a8b954d3f45fde44743cffa4e43915bc88341bc01974f83dc4183f016fc5e" Dec 12 05:02:01 crc kubenswrapper[4796]: I1212 05:02:01.168946 4796 scope.go:117] "RemoveContainer" containerID="65301d5ecc69ad877b7a1553751e7cc537d6b7a6bfd49f9bb45595ef814fc021" Dec 12 05:02:01 crc kubenswrapper[4796]: I1212 05:02:01.216331 4796 scope.go:117] "RemoveContainer" containerID="27a95856ba385b01fca08f9337c67999c04567f3e1835f8c24770aadf31176bf" Dec 12 05:02:12 crc kubenswrapper[4796]: I1212 05:02:12.411716 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:02:12 crc kubenswrapper[4796]: E1212 05:02:12.412789 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:02:14 crc kubenswrapper[4796]: I1212 05:02:14.997113 4796 generic.go:334] "Generic (PLEG): container finished" podID="86779d4a-5602-4b32-8e50-cd72fac17e8a" containerID="4d2bc98530928465ce8728ec97e74cffcfdd25b882f787e5cca562966ca18008" exitCode=0 Dec 12 05:02:14 crc kubenswrapper[4796]: I1212 05:02:14.997183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" event={"ID":"86779d4a-5602-4b32-8e50-cd72fac17e8a","Type":"ContainerDied","Data":"4d2bc98530928465ce8728ec97e74cffcfdd25b882f787e5cca562966ca18008"} Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.490490 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.541538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle\") pod \"86779d4a-5602-4b32-8e50-cd72fac17e8a\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.542409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdbzq\" (UniqueName: \"kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq\") pod \"86779d4a-5602-4b32-8e50-cd72fac17e8a\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.542884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key\") pod \"86779d4a-5602-4b32-8e50-cd72fac17e8a\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.547689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "86779d4a-5602-4b32-8e50-cd72fac17e8a" (UID: "86779d4a-5602-4b32-8e50-cd72fac17e8a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.555019 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq" (OuterVolumeSpecName: "kube-api-access-hdbzq") pod "86779d4a-5602-4b32-8e50-cd72fac17e8a" (UID: "86779d4a-5602-4b32-8e50-cd72fac17e8a"). InnerVolumeSpecName "kube-api-access-hdbzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.570353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "86779d4a-5602-4b32-8e50-cd72fac17e8a" (UID: "86779d4a-5602-4b32-8e50-cd72fac17e8a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.645533 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory\") pod \"86779d4a-5602-4b32-8e50-cd72fac17e8a\" (UID: \"86779d4a-5602-4b32-8e50-cd72fac17e8a\") " Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.646386 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.646510 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.646599 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdbzq\" (UniqueName: \"kubernetes.io/projected/86779d4a-5602-4b32-8e50-cd72fac17e8a-kube-api-access-hdbzq\") on node \"crc\" DevicePath \"\"" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.684778 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory" (OuterVolumeSpecName: "inventory") pod "86779d4a-5602-4b32-8e50-cd72fac17e8a" (UID: "86779d4a-5602-4b32-8e50-cd72fac17e8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:02:16 crc kubenswrapper[4796]: I1212 05:02:16.748184 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86779d4a-5602-4b32-8e50-cd72fac17e8a-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.022578 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" event={"ID":"86779d4a-5602-4b32-8e50-cd72fac17e8a","Type":"ContainerDied","Data":"d0aefa8fb67d396e1c0ffbb546b6812a4c919024bf7ad142e77086a526785b89"} Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.022845 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0aefa8fb67d396e1c0ffbb546b6812a4c919024bf7ad142e77086a526785b89" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.022635 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.122998 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts"] Dec 12 05:02:17 crc kubenswrapper[4796]: E1212 05:02:17.123416 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86779d4a-5602-4b32-8e50-cd72fac17e8a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.123435 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="86779d4a-5602-4b32-8e50-cd72fac17e8a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 05:02:17 crc kubenswrapper[4796]: E1212 05:02:17.123491 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" containerName="keystone-cron" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.123500 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" containerName="keystone-cron" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.123721 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9" containerName="keystone-cron" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.123742 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="86779d4a-5602-4b32-8e50-cd72fac17e8a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.124348 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.144345 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.145006 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.145024 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.145302 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.149227 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts"] Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.155994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4v45\" (UniqueName: \"kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.156065 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.156106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.257761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4v45\" (UniqueName: \"kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.257822 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.257852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.262395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.262565 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.274579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4v45\" (UniqueName: \"kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ckkts\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:17 crc kubenswrapper[4796]: I1212 05:02:17.441027 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:02:18 crc kubenswrapper[4796]: I1212 05:02:18.016427 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts"] Dec 12 05:02:18 crc kubenswrapper[4796]: I1212 05:02:18.033155 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:02:18 crc kubenswrapper[4796]: I1212 05:02:18.041449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" event={"ID":"45182716-6fae-4d42-81e2-ccdea8bf145b","Type":"ContainerStarted","Data":"7f75d6440e86d6d9cf601b7081cd4874b3dcaa34228fb19d01a7af7a070d3734"} Dec 12 05:02:19 crc kubenswrapper[4796]: I1212 05:02:19.051392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" event={"ID":"45182716-6fae-4d42-81e2-ccdea8bf145b","Type":"ContainerStarted","Data":"d030c8195921f64f903af52f9dd5546cbc804cb9668b30ccf101d7196da25023"} Dec 12 05:02:19 crc kubenswrapper[4796]: I1212 05:02:19.078591 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" podStartSLOduration=1.896209742 podStartE2EDuration="2.078576637s" podCreationTimestamp="2025-12-12 05:02:17 +0000 UTC" firstStartedPulling="2025-12-12 05:02:18.032861123 +0000 UTC m=+1728.908878270" lastFinishedPulling="2025-12-12 05:02:18.215228018 +0000 UTC m=+1729.091245165" observedRunningTime="2025-12-12 05:02:19.075866412 +0000 UTC m=+1729.951883559" watchObservedRunningTime="2025-12-12 05:02:19.078576637 +0000 UTC m=+1729.954593784" Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.059816 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fb2cw"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.071087 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gw45v"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.084651 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4jrtb"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.097933 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gw45v"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.106101 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fb2cw"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.115251 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zp246"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.124152 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4jrtb"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.133506 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zp246"] Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.423574 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476f745e-37e2-44ce-a24d-3326352757da" path="/var/lib/kubelet/pods/476f745e-37e2-44ce-a24d-3326352757da/volumes" Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.424687 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4803e636-bb92-4795-ad4a-76cbbb4e4edc" 
path="/var/lib/kubelet/pods/4803e636-bb92-4795-ad4a-76cbbb4e4edc/volumes" Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.426385 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fca11a6-5060-41a2-8294-405e1a9a7869" path="/var/lib/kubelet/pods/6fca11a6-5060-41a2-8294-405e1a9a7869/volumes" Dec 12 05:02:23 crc kubenswrapper[4796]: I1212 05:02:23.427781 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e668f1e-c987-43ad-b5de-06a419b8935d" path="/var/lib/kubelet/pods/7e668f1e-c987-43ad-b5de-06a419b8935d/volumes" Dec 12 05:02:24 crc kubenswrapper[4796]: I1212 05:02:24.411648 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:02:24 crc kubenswrapper[4796]: E1212 05:02:24.411983 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.031445 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dfca-account-create-update-h429p"] Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.041637 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4fcf-account-create-update-h9nv4"] Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.077345 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dfca-account-create-update-h429p"] Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.082644 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4fcf-account-create-update-h9nv4"] Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.092697 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e29a-account-create-update-dqt68"] Dec 12 05:02:30 crc kubenswrapper[4796]: I1212 05:02:30.100686 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e29a-account-create-update-dqt68"] Dec 12 05:02:31 crc kubenswrapper[4796]: I1212 05:02:31.423657 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869d643d-6e95-4c96-aa41-570474e69ff4" path="/var/lib/kubelet/pods/869d643d-6e95-4c96-aa41-570474e69ff4/volumes" Dec 12 05:02:31 crc kubenswrapper[4796]: I1212 05:02:31.425322 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da87676-800b-429b-8091-01d0756398ba" path="/var/lib/kubelet/pods/8da87676-800b-429b-8091-01d0756398ba/volumes" Dec 12 05:02:31 crc kubenswrapper[4796]: I1212 05:02:31.426962 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ca6dbe-d88e-416d-bff4-2944a012764f" path="/var/lib/kubelet/pods/a1ca6dbe-d88e-416d-bff4-2944a012764f/volumes" Dec 12 05:02:35 crc kubenswrapper[4796]: I1212 05:02:35.411537 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:02:35 crc kubenswrapper[4796]: E1212 05:02:35.413004 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:02:36 crc kubenswrapper[4796]: I1212 05:02:36.034761 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-b8llp"] Dec 12 05:02:36 crc kubenswrapper[4796]: I1212 05:02:36.051028 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-b8llp"] Dec 12 05:02:37 crc kubenswrapper[4796]: I1212 05:02:37.431933 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce82eef6-1a2a-45fa-8729-ea625e2863a9" path="/var/lib/kubelet/pods/ce82eef6-1a2a-45fa-8729-ea625e2863a9/volumes" Dec 12 05:02:50 crc kubenswrapper[4796]: I1212 05:02:50.411807 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:02:50 crc kubenswrapper[4796]: E1212 05:02:50.412618 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.389767 4796 scope.go:117] "RemoveContainer" containerID="ebda42774f565ed31224ce6f6ae9f99428c77701dc748722055c74037b29ff04" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.429900 4796 scope.go:117] "RemoveContainer" containerID="c28a591c50f93bc773a2c9900a86bce141ecbde4b93045b8e22b23bcf7a47a2d" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.469699 4796 scope.go:117] "RemoveContainer" containerID="61d7f86c420a473959122561b45d043ed087310800fdeb81dd713880b200c55c" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.511806 4796 scope.go:117] "RemoveContainer" containerID="9e7eb64db528e65506263db258cbd0969b9b18c7ffe9db938cf4fed2bc8d978d" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.556688 4796 scope.go:117] "RemoveContainer" containerID="fd7df020bedbf00d0fb075b70be2503ed8a0b42d9ca1a91faada12709cf11ba5" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.598757 4796 scope.go:117] "RemoveContainer" containerID="7f363eea3732302ee1f0688578b7e4503e9fbcd48a801e647e9bbb8c6e14e585" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.643422 4796 scope.go:117] "RemoveContainer" containerID="dad4cb511f37d8d6c9d7b540d91e770438db48c40160f1a267dbbacee9568ad3" Dec 12 05:03:01 crc kubenswrapper[4796]: I1212 05:03:01.661297 4796 scope.go:117] "RemoveContainer" containerID="929e3eeac75ad0f1095a83431fa9111093dcad47e8adbb8550717c3c70580f23" Dec 12 05:03:05 crc kubenswrapper[4796]: I1212 05:03:05.411105 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:03:05 crc kubenswrapper[4796]: E1212 05:03:05.411587 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:12 crc kubenswrapper[4796]: I1212 05:03:12.065129 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wwzfw"] Dec 12 05:03:12 crc kubenswrapper[4796]: I1212 05:03:12.078731 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wwzfw"] Dec 12 05:03:13 crc kubenswrapper[4796]: I1212 05:03:13.430598 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0b2dfd-d78a-45ba-aac1-fab7457e322c" path="/var/lib/kubelet/pods/9c0b2dfd-d78a-45ba-aac1-fab7457e322c/volumes" Dec 12 05:03:16 crc kubenswrapper[4796]: I1212 05:03:16.411814 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:03:16 crc kubenswrapper[4796]: E1212 05:03:16.412299 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:28 crc kubenswrapper[4796]: I1212 05:03:28.410876 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:03:28 crc kubenswrapper[4796]: E1212 05:03:28.411620 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.068149 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ft5nd"] Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.090804 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s2zgk"] Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.098757 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s672f"] Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.106298 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ft5nd"] Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.114506 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s2zgk"] Dec 12 05:03:32 crc kubenswrapper[4796]: I1212 05:03:32.121545 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s672f"] Dec 12 05:03:33 crc kubenswrapper[4796]: I1212 05:03:33.430678 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a294f9-a8f1-47e6-a551-8a47f1751c39" path="/var/lib/kubelet/pods/11a294f9-a8f1-47e6-a551-8a47f1751c39/volumes" Dec 12 05:03:33 crc kubenswrapper[4796]: I1212 05:03:33.432588 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32723a76-dbe0-493d-9a87-5c2f46912a71" path="/var/lib/kubelet/pods/32723a76-dbe0-493d-9a87-5c2f46912a71/volumes" Dec 12 05:03:33 crc kubenswrapper[4796]: I1212 05:03:33.435753 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b9082485-1887-4b6d-8e1f-371825f61dfc" path="/var/lib/kubelet/pods/b9082485-1887-4b6d-8e1f-371825f61dfc/volumes" Dec 12 05:03:41 crc kubenswrapper[4796]: I1212 05:03:41.413223 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:03:41 crc kubenswrapper[4796]: E1212 05:03:41.414028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:53 crc kubenswrapper[4796]: I1212 05:03:53.411755 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:03:53 crc kubenswrapper[4796]: E1212 05:03:53.413172 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:03:55 crc kubenswrapper[4796]: I1212 05:03:55.047650 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lr2pq"] Dec 12 05:03:55 crc kubenswrapper[4796]: I1212 05:03:55.059717 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lr2pq"] Dec 12 05:03:55 crc kubenswrapper[4796]: I1212 05:03:55.421357 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2af8481-6c64-4dc2-8028-b5a548dca4ff" path="/var/lib/kubelet/pods/f2af8481-6c64-4dc2-8028-b5a548dca4ff/volumes" Dec 12 05:04:01 crc kubenswrapper[4796]: I1212 05:04:01.796125 4796 scope.go:117] "RemoveContainer" containerID="d7ba977ca0a7c2b1a64bfdbd032965adad4d31f4380cfd90964d7ae6121e1a8e" Dec 12 05:04:01 crc kubenswrapper[4796]: I1212 05:04:01.832593 4796 scope.go:117] "RemoveContainer" containerID="8d21a072b5e63f30311044a42ffbfe3d326f135ba45e5866e09f3f4923daeb2e" Dec 12 05:04:01 crc kubenswrapper[4796]: I1212 05:04:01.888978 4796 scope.go:117] "RemoveContainer" containerID="c0ac27ddc674d4c589112df98817a51c81d670252bd4e28afe21a761f198c507" Dec 12 05:04:01 crc kubenswrapper[4796]: I1212 05:04:01.932680 4796 scope.go:117] "RemoveContainer" containerID="267dc094d0f17957dc3d7616911386fbd4df0d13a6a85376914267685b0644ee" Dec 12 05:04:01 crc kubenswrapper[4796]: I1212 05:04:01.980553 4796 scope.go:117] "RemoveContainer" containerID="46089d54e79ab5fabe27d181459baa63ef898a88d29bcf23b396d89ce5eedd2f" Dec 12 05:04:05 crc kubenswrapper[4796]: I1212 05:04:05.412068 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:04:05 crc kubenswrapper[4796]: E1212 05:04:05.412901 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:04:18 crc kubenswrapper[4796]: I1212 05:04:18.175168 4796 generic.go:334] "Generic (PLEG): container finished" podID="45182716-6fae-4d42-81e2-ccdea8bf145b" containerID="d030c8195921f64f903af52f9dd5546cbc804cb9668b30ccf101d7196da25023" exitCode=0 Dec 12 05:04:18 crc kubenswrapper[4796]: I1212 05:04:18.175333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" event={"ID":"45182716-6fae-4d42-81e2-ccdea8bf145b","Type":"ContainerDied","Data":"d030c8195921f64f903af52f9dd5546cbc804cb9668b30ccf101d7196da25023"} Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.414199 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:04:19 crc kubenswrapper[4796]: E1212 05:04:19.414930 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.561590 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.662568 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key\") pod \"45182716-6fae-4d42-81e2-ccdea8bf145b\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.662679 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4v45\" (UniqueName: \"kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45\") pod \"45182716-6fae-4d42-81e2-ccdea8bf145b\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.662762 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory\") pod \"45182716-6fae-4d42-81e2-ccdea8bf145b\" (UID: \"45182716-6fae-4d42-81e2-ccdea8bf145b\") " Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.669354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45" (OuterVolumeSpecName: "kube-api-access-s4v45") pod "45182716-6fae-4d42-81e2-ccdea8bf145b" (UID: "45182716-6fae-4d42-81e2-ccdea8bf145b"). InnerVolumeSpecName "kube-api-access-s4v45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.697543 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45182716-6fae-4d42-81e2-ccdea8bf145b" (UID: "45182716-6fae-4d42-81e2-ccdea8bf145b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.701890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory" (OuterVolumeSpecName: "inventory") pod "45182716-6fae-4d42-81e2-ccdea8bf145b" (UID: "45182716-6fae-4d42-81e2-ccdea8bf145b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.764546 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.764590 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45182716-6fae-4d42-81e2-ccdea8bf145b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:04:19 crc kubenswrapper[4796]: I1212 05:04:19.764604 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4v45\" (UniqueName: \"kubernetes.io/projected/45182716-6fae-4d42-81e2-ccdea8bf145b-kube-api-access-s4v45\") on node \"crc\" DevicePath \"\"" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.193020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" event={"ID":"45182716-6fae-4d42-81e2-ccdea8bf145b","Type":"ContainerDied","Data":"7f75d6440e86d6d9cf601b7081cd4874b3dcaa34228fb19d01a7af7a070d3734"} Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.193335 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f75d6440e86d6d9cf601b7081cd4874b3dcaa34228fb19d01a7af7a070d3734" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.193072 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ckkts" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.300056 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj"] Dec 12 05:04:20 crc kubenswrapper[4796]: E1212 05:04:20.300804 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45182716-6fae-4d42-81e2-ccdea8bf145b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.300903 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="45182716-6fae-4d42-81e2-ccdea8bf145b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.301228 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="45182716-6fae-4d42-81e2-ccdea8bf145b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.302122 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.304900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.305880 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.306628 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.306728 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.309622 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj"] Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.373814 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.373845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.373894 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzv4\" (UniqueName: \"kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.475855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.475970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.476037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhzv4\" (UniqueName: \"kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.479654 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.493543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.497491 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhzv4\" (UniqueName: \"kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-22ksj\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:20 crc kubenswrapper[4796]: I1212 05:04:20.628815 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:04:21 crc kubenswrapper[4796]: I1212 05:04:21.141732 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj"] Dec 12 05:04:21 crc kubenswrapper[4796]: W1212 05:04:21.145802 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906f3822_cad4_497a_a87e_d50a257f3b15.slice/crio-c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f WatchSource:0}: Error finding container c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f: Status 404 returned error can't find the container with id c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f Dec 12 05:04:21 crc kubenswrapper[4796]: I1212 05:04:21.202447 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" event={"ID":"906f3822-cad4-497a-a87e-d50a257f3b15","Type":"ContainerStarted","Data":"c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f"} Dec 12 05:04:22 crc kubenswrapper[4796]: I1212 05:04:22.211765 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" event={"ID":"906f3822-cad4-497a-a87e-d50a257f3b15","Type":"ContainerStarted","Data":"84d9fd4402ea9512afc0938c46aaf7a8682cb7bb042d36cac4a07dda1f7e0fb2"} Dec 12 05:04:22 crc kubenswrapper[4796]: I1212 05:04:22.234297 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" podStartSLOduration=2.061646844 podStartE2EDuration="2.234250174s" podCreationTimestamp="2025-12-12 05:04:20 +0000 UTC" firstStartedPulling="2025-12-12 05:04:21.148649833 +0000 UTC 
m=+1852.024666980" lastFinishedPulling="2025-12-12 05:04:21.321253163 +0000 UTC m=+1852.197270310" observedRunningTime="2025-12-12 05:04:22.229264039 +0000 UTC m=+1853.105281206" watchObservedRunningTime="2025-12-12 05:04:22.234250174 +0000 UTC m=+1853.110267331" Dec 12 05:04:34 crc kubenswrapper[4796]: I1212 05:04:34.411735 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:04:34 crc kubenswrapper[4796]: E1212 05:04:34.412712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.062213 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sjlzk"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.077191 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3c59-account-create-update-x8hfx"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.096363 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bpw4k"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.103912 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sjlzk"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.130310 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3c59-account-create-update-x8hfx"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.137240 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9ffa-account-create-update-sbpcq"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.147520 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nkhcz"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.156893 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bpw4k"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.165707 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9a93-account-create-update-6wjp2"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.174189 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9ffa-account-create-update-sbpcq"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.182150 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9a93-account-create-update-6wjp2"] Dec 12 05:04:42 crc kubenswrapper[4796]: I1212 05:04:42.190887 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nkhcz"] Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.421510 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26991dea-36e1-4038-9a9f-112a834e81dd" path="/var/lib/kubelet/pods/26991dea-36e1-4038-9a9f-112a834e81dd/volumes" Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.422438 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a4043e-72a9-434f-9a65-f64d0e6846df" path="/var/lib/kubelet/pods/58a4043e-72a9-434f-9a65-f64d0e6846df/volumes" Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.423108 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8958fdf9-254c-4b91-aec6-b49c39d63856" path="/var/lib/kubelet/pods/8958fdf9-254c-4b91-aec6-b49c39d63856/volumes" Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.423868 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5503fc1-7260-47f1-ac1c-97650aacd9d8" path="/var/lib/kubelet/pods/b5503fc1-7260-47f1-ac1c-97650aacd9d8/volumes" Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.425469 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd1f195-1eb4-41ed-a548-4f3e7e0d5090" path="/var/lib/kubelet/pods/cbd1f195-1eb4-41ed-a548-4f3e7e0d5090/volumes" Dec 12 05:04:43 crc kubenswrapper[4796]: I1212 05:04:43.426103 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e" path="/var/lib/kubelet/pods/f9e8ceb9-8b2b-43be-8263-5d29d3ca2b9e/volumes" Dec 12 05:04:49 crc kubenswrapper[4796]: I1212 05:04:49.416530 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:04:49 crc kubenswrapper[4796]: E1212 05:04:49.417231 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.107826 4796 scope.go:117] "RemoveContainer" containerID="3d097becea6f0bee786581278d069985871d9bfdae627ef9fef1b839bfced863" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.142515 4796 scope.go:117] "RemoveContainer" containerID="184e71120c10bc2b9fdd3653138d949cc7fb487dd5925bc8228a0e4838ac6020" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.184472 4796 scope.go:117] "RemoveContainer" containerID="d5a4fd0b12bf466f4cdb65c8b519f0984a856710fff5f6dbbea0533939734875" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.228338 4796 scope.go:117] "RemoveContainer" containerID="a22573f52dc6d40f8511df4a5192727f699b594c99a1abc0d5714a29c94977e6" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.280615 4796 scope.go:117] "RemoveContainer" containerID="682e6fd791d4a8ebd8af706cdd2ef9bb6fc2c87efa167d13b7954e56de84dec3" Dec 12 05:05:02 crc kubenswrapper[4796]: I1212 05:05:02.334261 4796 scope.go:117] "RemoveContainer" containerID="37d90340a70c1697621ff16565f0bdcc984efa76ef1ed350bacfd488c885ece2" Dec 12 05:05:04 crc kubenswrapper[4796]: I1212 05:05:04.410842 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:05:04 crc kubenswrapper[4796]: E1212 05:05:04.411109 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:05:13 crc kubenswrapper[4796]: I1212 05:05:13.446151 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fwxjx"] Dec 12 05:05:13 crc 
kubenswrapper[4796]: I1212 05:05:13.463436 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fwxjx"] Dec 12 05:05:15 crc kubenswrapper[4796]: I1212 05:05:15.423148 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b790398-6b23-4d61-9ad2-a79b868ad057" path="/var/lib/kubelet/pods/2b790398-6b23-4d61-9ad2-a79b868ad057/volumes" Dec 12 05:05:19 crc kubenswrapper[4796]: I1212 05:05:19.419211 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:05:19 crc kubenswrapper[4796]: E1212 05:05:19.419855 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:05:33 crc kubenswrapper[4796]: I1212 05:05:33.411730 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:05:33 crc kubenswrapper[4796]: E1212 05:05:33.412465 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:05:38 crc kubenswrapper[4796]: I1212 05:05:38.894846 4796 generic.go:334] "Generic (PLEG): container finished" podID="906f3822-cad4-497a-a87e-d50a257f3b15" containerID="84d9fd4402ea9512afc0938c46aaf7a8682cb7bb042d36cac4a07dda1f7e0fb2" exitCode=0 Dec 12 05:05:38 crc kubenswrapper[4796]: I1212 05:05:38.894916 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" event={"ID":"906f3822-cad4-497a-a87e-d50a257f3b15","Type":"ContainerDied","Data":"84d9fd4402ea9512afc0938c46aaf7a8682cb7bb042d36cac4a07dda1f7e0fb2"} Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.314868 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.357551 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key\") pod \"906f3822-cad4-497a-a87e-d50a257f3b15\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.357849 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory\") pod \"906f3822-cad4-497a-a87e-d50a257f3b15\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.357875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhzv4\" (UniqueName: \"kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4\") pod \"906f3822-cad4-497a-a87e-d50a257f3b15\" (UID: \"906f3822-cad4-497a-a87e-d50a257f3b15\") " Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.363637 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4" (OuterVolumeSpecName: "kube-api-access-rhzv4") pod "906f3822-cad4-497a-a87e-d50a257f3b15" (UID: "906f3822-cad4-497a-a87e-d50a257f3b15"). InnerVolumeSpecName "kube-api-access-rhzv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.388824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "906f3822-cad4-497a-a87e-d50a257f3b15" (UID: "906f3822-cad4-497a-a87e-d50a257f3b15"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.389634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory" (OuterVolumeSpecName: "inventory") pod "906f3822-cad4-497a-a87e-d50a257f3b15" (UID: "906f3822-cad4-497a-a87e-d50a257f3b15"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.460818 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.461076 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/906f3822-cad4-497a-a87e-d50a257f3b15-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.461138 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhzv4\" (UniqueName: \"kubernetes.io/projected/906f3822-cad4-497a-a87e-d50a257f3b15-kube-api-access-rhzv4\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.915921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" event={"ID":"906f3822-cad4-497a-a87e-d50a257f3b15","Type":"ContainerDied","Data":"c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f"} Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.915991 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1efed2f1ce75c491b187210f3558f58bc5afbb15c340f889b3cdad43f0fb80f" Dec 12 05:05:40 crc kubenswrapper[4796]: I1212 05:05:40.915963 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-22ksj" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.000738 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg"] Dec 12 05:05:41 crc kubenswrapper[4796]: E1212 05:05:41.001257 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906f3822-cad4-497a-a87e-d50a257f3b15" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.001297 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="906f3822-cad4-497a-a87e-d50a257f3b15" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.001571 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="906f3822-cad4-497a-a87e-d50a257f3b15" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.002391 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.005563 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.005940 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.006873 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.007749 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.010249 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg"] Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.070013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.070066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.070176 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74ftk\" (UniqueName: \"kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.172252 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.173507 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.173750 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74ftk\" (UniqueName: \"kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.177290 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.180558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.192557 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74ftk\" (UniqueName: \"kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.319878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.833017 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg"] Dec 12 05:05:41 crc kubenswrapper[4796]: I1212 05:05:41.925958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" event={"ID":"165cb754-40d9-41ec-abd3-1f5fbaeeb13c","Type":"ContainerStarted","Data":"1c206bf7ab0b6678ca756812578e29b1c3ada06ead56590ad3990b765a524d00"} Dec 12 05:05:42 crc kubenswrapper[4796]: I1212 05:05:42.050688 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tg6gj"] Dec 12 05:05:42 crc kubenswrapper[4796]: I1212 05:05:42.061506 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tg6gj"] Dec 12 05:05:42 crc kubenswrapper[4796]: I1212 05:05:42.935490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" event={"ID":"165cb754-40d9-41ec-abd3-1f5fbaeeb13c","Type":"ContainerStarted","Data":"e327a92947a0fbbf30a91ac4b97f9c97d843867cb4f5916b6cf5ff0510226b3e"} Dec 12 05:05:42 crc kubenswrapper[4796]: I1212 05:05:42.962645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" podStartSLOduration=2.7746848010000003 podStartE2EDuration="2.962621811s" podCreationTimestamp="2025-12-12 05:05:40 +0000 UTC" firstStartedPulling="2025-12-12 05:05:41.844258211 +0000 UTC m=+1932.720275358" lastFinishedPulling="2025-12-12 05:05:42.032195211 +0000 UTC m=+1932.908212368" observedRunningTime="2025-12-12 05:05:42.954751025 +0000 UTC m=+1933.830768172" watchObservedRunningTime="2025-12-12 
05:05:42.962621811 +0000 UTC m=+1933.838638968" Dec 12 05:05:43 crc kubenswrapper[4796]: I1212 05:05:43.426251 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3077f41-167d-414c-9af4-05a03c32ab03" path="/var/lib/kubelet/pods/b3077f41-167d-414c-9af4-05a03c32ab03/volumes" Dec 12 05:05:44 crc kubenswrapper[4796]: I1212 05:05:44.036331 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jt82w"] Dec 12 05:05:44 crc kubenswrapper[4796]: I1212 05:05:44.048052 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jt82w"] Dec 12 05:05:45 crc kubenswrapper[4796]: I1212 05:05:45.411954 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:05:45 crc kubenswrapper[4796]: E1212 05:05:45.412285 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:05:45 crc kubenswrapper[4796]: I1212 05:05:45.427473 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0197fef8-4748-4ef6-a3cd-b038975d8882" path="/var/lib/kubelet/pods/0197fef8-4748-4ef6-a3cd-b038975d8882/volumes" Dec 12 05:05:47 crc kubenswrapper[4796]: I1212 05:05:47.985657 4796 generic.go:334] "Generic (PLEG): container finished" podID="165cb754-40d9-41ec-abd3-1f5fbaeeb13c" containerID="e327a92947a0fbbf30a91ac4b97f9c97d843867cb4f5916b6cf5ff0510226b3e" exitCode=0 Dec 12 05:05:47 crc kubenswrapper[4796]: I1212 05:05:47.985749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" event={"ID":"165cb754-40d9-41ec-abd3-1f5fbaeeb13c","Type":"ContainerDied","Data":"e327a92947a0fbbf30a91ac4b97f9c97d843867cb4f5916b6cf5ff0510226b3e"} Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.412111 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.549219 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74ftk\" (UniqueName: \"kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk\") pod \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.549566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory\") pod \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.549922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key\") pod \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\" (UID: \"165cb754-40d9-41ec-abd3-1f5fbaeeb13c\") " Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.558493 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk" (OuterVolumeSpecName: "kube-api-access-74ftk") pod "165cb754-40d9-41ec-abd3-1f5fbaeeb13c" (UID: "165cb754-40d9-41ec-abd3-1f5fbaeeb13c"). InnerVolumeSpecName "kube-api-access-74ftk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.579477 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory" (OuterVolumeSpecName: "inventory") pod "165cb754-40d9-41ec-abd3-1f5fbaeeb13c" (UID: "165cb754-40d9-41ec-abd3-1f5fbaeeb13c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.583212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "165cb754-40d9-41ec-abd3-1f5fbaeeb13c" (UID: "165cb754-40d9-41ec-abd3-1f5fbaeeb13c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.653050 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74ftk\" (UniqueName: \"kubernetes.io/projected/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-kube-api-access-74ftk\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.653081 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:49 crc kubenswrapper[4796]: I1212 05:05:49.653092 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/165cb754-40d9-41ec-abd3-1f5fbaeeb13c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.007647 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" event={"ID":"165cb754-40d9-41ec-abd3-1f5fbaeeb13c","Type":"ContainerDied","Data":"1c206bf7ab0b6678ca756812578e29b1c3ada06ead56590ad3990b765a524d00"} Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.007910 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c206bf7ab0b6678ca756812578e29b1c3ada06ead56590ad3990b765a524d00" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.007718 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.091180 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt"] Dec 12 05:05:50 crc kubenswrapper[4796]: E1212 05:05:50.091737 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165cb754-40d9-41ec-abd3-1f5fbaeeb13c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.091761 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="165cb754-40d9-41ec-abd3-1f5fbaeeb13c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.091985 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="165cb754-40d9-41ec-abd3-1f5fbaeeb13c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.092841 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.100790 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.100959 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.101187 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt"] Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.101211 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.101603 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.160731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.160795 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.161049 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkpl\" (UniqueName: \"kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.263203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.263270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.263380 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lkpl\" (UniqueName: \"kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: 
\"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.267869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.271639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.291456 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkpl\" (UniqueName: \"kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4tkgt\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.414640 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:05:50 crc kubenswrapper[4796]: I1212 05:05:50.965448 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt"] Dec 12 05:05:51 crc kubenswrapper[4796]: I1212 05:05:51.016944 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" event={"ID":"b92a97d2-b9e1-4717-a79c-a085aaaed3b6","Type":"ContainerStarted","Data":"8bde14fac3a73184ff843de5e74112cdc1c26c698470d692c6a46336047bd96b"} Dec 12 05:05:52 crc kubenswrapper[4796]: I1212 05:05:52.049712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" event={"ID":"b92a97d2-b9e1-4717-a79c-a085aaaed3b6","Type":"ContainerStarted","Data":"c8fbf15999508e5b8afd9f702619387156db628daea7aa0a60b2bc12b76aca53"} Dec 12 05:05:52 crc kubenswrapper[4796]: I1212 05:05:52.073482 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" podStartSLOduration=1.868011991 podStartE2EDuration="2.073458828s" podCreationTimestamp="2025-12-12 05:05:50 +0000 UTC" firstStartedPulling="2025-12-12 05:05:50.985444377 +0000 UTC m=+1941.861461534" lastFinishedPulling="2025-12-12 05:05:51.190891224 +0000 UTC m=+1942.066908371" observedRunningTime="2025-12-12 05:05:52.067914614 +0000 UTC m=+1942.943931761" watchObservedRunningTime="2025-12-12 05:05:52.073458828 +0000 UTC m=+1942.949475975" Dec 12 05:06:00 crc kubenswrapper[4796]: I1212 05:06:00.411168 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:06:00 crc kubenswrapper[4796]: E1212 05:06:00.411957 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:06:02 crc kubenswrapper[4796]: I1212 05:06:02.466081 4796 scope.go:117] "RemoveContainer" containerID="905595ae0f665d543c39961899db65f640913641eab0ccf7cfb57037233c6261" Dec 12 05:06:02 crc kubenswrapper[4796]: I1212 05:06:02.497889 4796 scope.go:117] "RemoveContainer" containerID="17fbdfc55ec1b1878b7115898e6c9c04f2fddd2b6e52a7c1fe53c85102fc1cf1" Dec 12 05:06:02 crc kubenswrapper[4796]: I1212 05:06:02.567479 4796 scope.go:117] "RemoveContainer" containerID="5027cd938388feeef5402e43196b5e72000d5149d7a5ddd740a607f2331d72ca" Dec 12 05:06:12 crc kubenswrapper[4796]: I1212 05:06:12.430199 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:06:12 crc kubenswrapper[4796]: E1212 05:06:12.431241 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:06:25 crc kubenswrapper[4796]: I1212 05:06:25.075831 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hfnkc"] Dec 12 05:06:25 crc kubenswrapper[4796]: I1212 05:06:25.088468 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hfnkc"] Dec 12 05:06:25 crc kubenswrapper[4796]: I1212 05:06:25.422652 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b94b77-fe1e-4a8f-b334-8f232c6c3bf9" path="/var/lib/kubelet/pods/18b94b77-fe1e-4a8f-b334-8f232c6c3bf9/volumes" Dec 12 05:06:26 crc kubenswrapper[4796]: I1212 05:06:26.411190 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:06:26 crc kubenswrapper[4796]: E1212 05:06:26.411475 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:06:34 crc kubenswrapper[4796]: I1212 05:06:34.407466 4796 generic.go:334] "Generic (PLEG): container finished" podID="b92a97d2-b9e1-4717-a79c-a085aaaed3b6" containerID="c8fbf15999508e5b8afd9f702619387156db628daea7aa0a60b2bc12b76aca53" exitCode=0 Dec 12 05:06:34 crc kubenswrapper[4796]: I1212 05:06:34.407549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" event={"ID":"b92a97d2-b9e1-4717-a79c-a085aaaed3b6","Type":"ContainerDied","Data":"c8fbf15999508e5b8afd9f702619387156db628daea7aa0a60b2bc12b76aca53"} Dec 12 05:06:35 crc kubenswrapper[4796]: I1212 05:06:35.868027 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:06:35 crc kubenswrapper[4796]: I1212 05:06:35.959554 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key\") pod \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " Dec 12 05:06:35 crc kubenswrapper[4796]: I1212 05:06:35.960537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory\") pod \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " Dec 12 05:06:35 crc kubenswrapper[4796]: I1212 05:06:35.960702 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkpl\" (UniqueName: \"kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl\") pod \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\" (UID: \"b92a97d2-b9e1-4717-a79c-a085aaaed3b6\") " Dec 12 05:06:35 crc kubenswrapper[4796]: I1212 05:06:35.966823 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl" (OuterVolumeSpecName: "kube-api-access-6lkpl") pod "b92a97d2-b9e1-4717-a79c-a085aaaed3b6" (UID: "b92a97d2-b9e1-4717-a79c-a085aaaed3b6"). InnerVolumeSpecName "kube-api-access-6lkpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.002605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b92a97d2-b9e1-4717-a79c-a085aaaed3b6" (UID: "b92a97d2-b9e1-4717-a79c-a085aaaed3b6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.004570 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory" (OuterVolumeSpecName: "inventory") pod "b92a97d2-b9e1-4717-a79c-a085aaaed3b6" (UID: "b92a97d2-b9e1-4717-a79c-a085aaaed3b6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.063498 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkpl\" (UniqueName: \"kubernetes.io/projected/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-kube-api-access-6lkpl\") on node \"crc\" DevicePath \"\"" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.063708 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.063818 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b92a97d2-b9e1-4717-a79c-a085aaaed3b6-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.445623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" event={"ID":"b92a97d2-b9e1-4717-a79c-a085aaaed3b6","Type":"ContainerDied","Data":"8bde14fac3a73184ff843de5e74112cdc1c26c698470d692c6a46336047bd96b"} Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.445939 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bde14fac3a73184ff843de5e74112cdc1c26c698470d692c6a46336047bd96b" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.445654 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4tkgt" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.528038 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr"] Dec 12 05:06:36 crc kubenswrapper[4796]: E1212 05:06:36.528457 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92a97d2-b9e1-4717-a79c-a085aaaed3b6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.528475 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92a97d2-b9e1-4717-a79c-a085aaaed3b6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.528668 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92a97d2-b9e1-4717-a79c-a085aaaed3b6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.529314 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.531213 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.532197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.532290 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.532377 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.589302 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr"] Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.675424 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.675474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98mj\" (UniqueName: \"kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.675632 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.778290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.778381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98mj\" (UniqueName: \"kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.778446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" 
(UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.782662 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.791091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.796001 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98mj\" (UniqueName: \"kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6qncr\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:36 crc kubenswrapper[4796]: I1212 05:06:36.846256 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:06:37 crc kubenswrapper[4796]: I1212 05:06:37.441258 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr"] Dec 12 05:06:37 crc kubenswrapper[4796]: I1212 05:06:37.469446 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" event={"ID":"33889558-2c62-4dcd-ba10-c98855839d1e","Type":"ContainerStarted","Data":"772d3d0cdde2735ef223ddf2314b3d5b15ad3453c261c402bdd67cabfec29bb0"} Dec 12 05:06:38 crc kubenswrapper[4796]: I1212 05:06:38.483249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" event={"ID":"33889558-2c62-4dcd-ba10-c98855839d1e","Type":"ContainerStarted","Data":"d73eec5f00a5e59f0c2b658adf38f29feadf118e99c50c40a6a45223c49ad120"} Dec 12 05:06:38 crc kubenswrapper[4796]: I1212 05:06:38.497393 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" podStartSLOduration=2.31323245 podStartE2EDuration="2.497377312s" podCreationTimestamp="2025-12-12 05:06:36 +0000 UTC" firstStartedPulling="2025-12-12 05:06:37.456383091 +0000 UTC m=+1988.332400238" lastFinishedPulling="2025-12-12 05:06:37.640527953 +0000 UTC m=+1988.516545100" observedRunningTime="2025-12-12 05:06:38.496119502 +0000 UTC m=+1989.372136649" watchObservedRunningTime="2025-12-12 05:06:38.497377312 +0000 UTC m=+1989.373394459" Dec 12 05:06:40 crc kubenswrapper[4796]: I1212 05:06:40.412315 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:06:41 crc kubenswrapper[4796]: I1212 05:06:41.514396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d"} Dec 12 05:07:02 crc kubenswrapper[4796]: I1212 05:07:02.672155 4796 scope.go:117] "RemoveContainer" containerID="dbd8c635bfcc8f3006ee8a45f2acf52d3e73a5c4e5c91679b4ee13fa524f7df1" Dec 12 05:07:34 crc kubenswrapper[4796]: I1212 05:07:34.961065 4796 generic.go:334] "Generic (PLEG): container finished" podID="33889558-2c62-4dcd-ba10-c98855839d1e" containerID="d73eec5f00a5e59f0c2b658adf38f29feadf118e99c50c40a6a45223c49ad120" exitCode=0 Dec 12 05:07:34 crc kubenswrapper[4796]: I1212 05:07:34.961519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" event={"ID":"33889558-2c62-4dcd-ba10-c98855839d1e","Type":"ContainerDied","Data":"d73eec5f00a5e59f0c2b658adf38f29feadf118e99c50c40a6a45223c49ad120"} Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.362327 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.438266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98mj\" (UniqueName: \"kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj\") pod \"33889558-2c62-4dcd-ba10-c98855839d1e\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.438604 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key\") pod \"33889558-2c62-4dcd-ba10-c98855839d1e\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.438636 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory\") pod \"33889558-2c62-4dcd-ba10-c98855839d1e\" (UID: \"33889558-2c62-4dcd-ba10-c98855839d1e\") " Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.449715 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj" (OuterVolumeSpecName: "kube-api-access-v98mj") pod "33889558-2c62-4dcd-ba10-c98855839d1e" (UID: "33889558-2c62-4dcd-ba10-c98855839d1e"). InnerVolumeSpecName "kube-api-access-v98mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.471813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "33889558-2c62-4dcd-ba10-c98855839d1e" (UID: "33889558-2c62-4dcd-ba10-c98855839d1e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.472902 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory" (OuterVolumeSpecName: "inventory") pod "33889558-2c62-4dcd-ba10-c98855839d1e" (UID: "33889558-2c62-4dcd-ba10-c98855839d1e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.540804 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.540845 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33889558-2c62-4dcd-ba10-c98855839d1e-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.540857 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98mj\" (UniqueName: \"kubernetes.io/projected/33889558-2c62-4dcd-ba10-c98855839d1e-kube-api-access-v98mj\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.982915 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.984396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6qncr" event={"ID":"33889558-2c62-4dcd-ba10-c98855839d1e","Type":"ContainerDied","Data":"772d3d0cdde2735ef223ddf2314b3d5b15ad3453c261c402bdd67cabfec29bb0"} Dec 12 05:07:36 crc kubenswrapper[4796]: I1212 05:07:36.984446 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772d3d0cdde2735ef223ddf2314b3d5b15ad3453c261c402bdd67cabfec29bb0" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.102618 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f95w9"] Dec 12 05:07:37 crc kubenswrapper[4796]: E1212 05:07:37.103086 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33889558-2c62-4dcd-ba10-c98855839d1e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.103110 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="33889558-2c62-4dcd-ba10-c98855839d1e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.103432 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="33889558-2c62-4dcd-ba10-c98855839d1e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.104174 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.107176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.107610 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.107738 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.110197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.123568 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f95w9"] Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.256457 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ml78\" (UniqueName: \"kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.256863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.256934 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.359075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.359176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.359327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ml78\" (UniqueName: \"kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc 
kubenswrapper[4796]: I1212 05:07:37.364057 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.364057 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.379233 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ml78\" (UniqueName: \"kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78\") pod \"ssh-known-hosts-edpm-deployment-f95w9\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.429145 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.973303 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f95w9"] Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.978264 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:07:37 crc kubenswrapper[4796]: I1212 05:07:37.991374 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" event={"ID":"d1178cb0-94fb-46a2-84b8-a67ed7e55856","Type":"ContainerStarted","Data":"cb7ee853743736f67a32482602cca77f055238214f55a151cf57a1442e98f832"} Dec 12 05:07:39 crc kubenswrapper[4796]: I1212 05:07:39.002045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" event={"ID":"d1178cb0-94fb-46a2-84b8-a67ed7e55856","Type":"ContainerStarted","Data":"98c72363092fda36360b15459253549a335aa7ca5b45bf19996e65dd53a0ccf3"} Dec 12 05:07:39 crc kubenswrapper[4796]: I1212 05:07:39.019931 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" podStartSLOduration=1.815957517 podStartE2EDuration="2.019912358s" podCreationTimestamp="2025-12-12 05:07:37 +0000 UTC" firstStartedPulling="2025-12-12 05:07:37.977927266 +0000 UTC m=+2048.853944413" lastFinishedPulling="2025-12-12 05:07:38.181882107 +0000 UTC m=+2049.057899254" observedRunningTime="2025-12-12 05:07:39.019537806 +0000 UTC m=+2049.895554953" watchObservedRunningTime="2025-12-12 05:07:39.019912358 +0000 UTC m=+2049.895929505" Dec 12 05:07:46 crc kubenswrapper[4796]: I1212 05:07:46.061090 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1178cb0-94fb-46a2-84b8-a67ed7e55856" containerID="98c72363092fda36360b15459253549a335aa7ca5b45bf19996e65dd53a0ccf3" exitCode=0 Dec 12 05:07:46 crc kubenswrapper[4796]: I1212 05:07:46.061165 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" 
event={"ID":"d1178cb0-94fb-46a2-84b8-a67ed7e55856","Type":"ContainerDied","Data":"98c72363092fda36360b15459253549a335aa7ca5b45bf19996e65dd53a0ccf3"} Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.573936 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.653117 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0\") pod \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.653248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam\") pod \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.653355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ml78\" (UniqueName: \"kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78\") pod \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\" (UID: \"d1178cb0-94fb-46a2-84b8-a67ed7e55856\") " Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.658630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78" (OuterVolumeSpecName: "kube-api-access-2ml78") pod "d1178cb0-94fb-46a2-84b8-a67ed7e55856" (UID: "d1178cb0-94fb-46a2-84b8-a67ed7e55856"). InnerVolumeSpecName "kube-api-access-2ml78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.682884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d1178cb0-94fb-46a2-84b8-a67ed7e55856" (UID: "d1178cb0-94fb-46a2-84b8-a67ed7e55856"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.691133 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1178cb0-94fb-46a2-84b8-a67ed7e55856" (UID: "d1178cb0-94fb-46a2-84b8-a67ed7e55856"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.755560 4796 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.755740 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1178cb0-94fb-46a2-84b8-a67ed7e55856-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:47 crc kubenswrapper[4796]: I1212 05:07:47.755820 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ml78\" (UniqueName: \"kubernetes.io/projected/d1178cb0-94fb-46a2-84b8-a67ed7e55856-kube-api-access-2ml78\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.087728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" event={"ID":"d1178cb0-94fb-46a2-84b8-a67ed7e55856","Type":"ContainerDied","Data":"cb7ee853743736f67a32482602cca77f055238214f55a151cf57a1442e98f832"} Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.087796 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f95w9" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.087802 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb7ee853743736f67a32482602cca77f055238214f55a151cf57a1442e98f832" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.227264 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q"] Dec 12 05:07:48 crc kubenswrapper[4796]: E1212 05:07:48.228325 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1178cb0-94fb-46a2-84b8-a67ed7e55856" containerName="ssh-known-hosts-edpm-deployment" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.228352 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1178cb0-94fb-46a2-84b8-a67ed7e55856" containerName="ssh-known-hosts-edpm-deployment" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.228903 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1178cb0-94fb-46a2-84b8-a67ed7e55856" containerName="ssh-known-hosts-edpm-deployment" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.230151 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.244943 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.245405 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.245552 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.264649 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.267910 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q"] Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.369306 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.369400 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.369481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4qf\" (UniqueName: \"kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.470786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.470862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.470934 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm4qf\" (UniqueName: \"kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.475044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.475633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.496829 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4qf\" (UniqueName: \"kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4m89q\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:48 crc kubenswrapper[4796]: I1212 05:07:48.574041 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:49 crc kubenswrapper[4796]: I1212 05:07:49.093326 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q"] Dec 12 05:07:50 crc kubenswrapper[4796]: I1212 05:07:50.105568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" event={"ID":"925383e5-f552-447e-a749-b0337865ce48","Type":"ContainerStarted","Data":"81379ed347063f63ed2564525b19b23bf43f887163278028ed07955ccb8d6204"} Dec 12 05:07:50 crc kubenswrapper[4796]: I1212 05:07:50.105877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" event={"ID":"925383e5-f552-447e-a749-b0337865ce48","Type":"ContainerStarted","Data":"256d275e824c7951140e8d11be36455d379c57505e29d6c3bc6fd448bd678684"} Dec 12 05:07:50 crc kubenswrapper[4796]: I1212 05:07:50.147141 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" podStartSLOduration=1.96363423 podStartE2EDuration="2.147115652s" podCreationTimestamp="2025-12-12 05:07:48 +0000 UTC" firstStartedPulling="2025-12-12 05:07:49.106169683 +0000 UTC m=+2059.982186830" lastFinishedPulling="2025-12-12 05:07:49.289651105 +0000 UTC m=+2060.165668252" observedRunningTime="2025-12-12 05:07:50.134233999 +0000 UTC m=+2061.010251166" watchObservedRunningTime="2025-12-12 05:07:50.147115652 +0000 UTC m=+2061.023132809" Dec 12 05:07:58 crc kubenswrapper[4796]: I1212 05:07:58.168382 4796 generic.go:334] "Generic (PLEG): container finished" podID="925383e5-f552-447e-a749-b0337865ce48" containerID="81379ed347063f63ed2564525b19b23bf43f887163278028ed07955ccb8d6204" exitCode=0 Dec 12 05:07:58 crc kubenswrapper[4796]: I1212 05:07:58.168516 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" 
event={"ID":"925383e5-f552-447e-a749-b0337865ce48","Type":"ContainerDied","Data":"81379ed347063f63ed2564525b19b23bf43f887163278028ed07955ccb8d6204"} Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.602986 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.681201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm4qf\" (UniqueName: \"kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf\") pod \"925383e5-f552-447e-a749-b0337865ce48\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.681324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory\") pod \"925383e5-f552-447e-a749-b0337865ce48\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.681420 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key\") pod \"925383e5-f552-447e-a749-b0337865ce48\" (UID: \"925383e5-f552-447e-a749-b0337865ce48\") " Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.687365 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf" (OuterVolumeSpecName: "kube-api-access-mm4qf") pod "925383e5-f552-447e-a749-b0337865ce48" (UID: "925383e5-f552-447e-a749-b0337865ce48"). InnerVolumeSpecName "kube-api-access-mm4qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.716804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "925383e5-f552-447e-a749-b0337865ce48" (UID: "925383e5-f552-447e-a749-b0337865ce48"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.727616 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory" (OuterVolumeSpecName: "inventory") pod "925383e5-f552-447e-a749-b0337865ce48" (UID: "925383e5-f552-447e-a749-b0337865ce48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.783530 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm4qf\" (UniqueName: \"kubernetes.io/projected/925383e5-f552-447e-a749-b0337865ce48-kube-api-access-mm4qf\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.783566 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:07:59 crc kubenswrapper[4796]: I1212 05:07:59.783579 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/925383e5-f552-447e-a749-b0337865ce48-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.187589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" event={"ID":"925383e5-f552-447e-a749-b0337865ce48","Type":"ContainerDied","Data":"256d275e824c7951140e8d11be36455d379c57505e29d6c3bc6fd448bd678684"} Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.187976 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256d275e824c7951140e8d11be36455d379c57505e29d6c3bc6fd448bd678684" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.187628 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4m89q" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.279674 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l"] Dec 12 05:08:00 crc kubenswrapper[4796]: E1212 05:08:00.280075 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925383e5-f552-447e-a749-b0337865ce48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.280090 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="925383e5-f552-447e-a749-b0337865ce48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.280320 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="925383e5-f552-447e-a749-b0337865ce48" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.280949 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.282904 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.283081 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.284649 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.287923 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.306341 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l"] Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.395575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.395777 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.396075 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mjt\" (UniqueName: \"kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.497334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.497565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.497761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96mjt\" (UniqueName: \"kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: 
\"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.501605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.503579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.515017 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mjt\" (UniqueName: \"kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:00 crc kubenswrapper[4796]: I1212 05:08:00.597672 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:01 crc kubenswrapper[4796]: I1212 05:08:01.117254 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l"] Dec 12 05:08:01 crc kubenswrapper[4796]: I1212 05:08:01.196048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" event={"ID":"e3c26ddb-8907-4b44-bc42-86138dc25d8b","Type":"ContainerStarted","Data":"d320cd7b92772b34ed6da72f172c8469fe09e79bf532efaa8aac88aa8ecd0cbc"} Dec 12 05:08:02 crc kubenswrapper[4796]: I1212 05:08:02.207396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" event={"ID":"e3c26ddb-8907-4b44-bc42-86138dc25d8b","Type":"ContainerStarted","Data":"910514e5f8f2c02fc19630a6e1f821884fe1a8342fb50f76c8547f8f860f7bd0"} Dec 12 05:08:12 crc kubenswrapper[4796]: I1212 05:08:12.296953 4796 generic.go:334] "Generic (PLEG): container finished" podID="e3c26ddb-8907-4b44-bc42-86138dc25d8b" containerID="910514e5f8f2c02fc19630a6e1f821884fe1a8342fb50f76c8547f8f860f7bd0" exitCode=0 Dec 12 05:08:12 crc kubenswrapper[4796]: I1212 05:08:12.298387 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" event={"ID":"e3c26ddb-8907-4b44-bc42-86138dc25d8b","Type":"ContainerDied","Data":"910514e5f8f2c02fc19630a6e1f821884fe1a8342fb50f76c8547f8f860f7bd0"} Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.732328 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.867839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key\") pod \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.867959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mjt\" (UniqueName: \"kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt\") pod \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.868136 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory\") pod \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\" (UID: \"e3c26ddb-8907-4b44-bc42-86138dc25d8b\") " Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.874454 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt" (OuterVolumeSpecName: "kube-api-access-96mjt") pod "e3c26ddb-8907-4b44-bc42-86138dc25d8b" (UID: "e3c26ddb-8907-4b44-bc42-86138dc25d8b"). InnerVolumeSpecName "kube-api-access-96mjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.897545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3c26ddb-8907-4b44-bc42-86138dc25d8b" (UID: "e3c26ddb-8907-4b44-bc42-86138dc25d8b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.913163 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory" (OuterVolumeSpecName: "inventory") pod "e3c26ddb-8907-4b44-bc42-86138dc25d8b" (UID: "e3c26ddb-8907-4b44-bc42-86138dc25d8b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.970484 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mjt\" (UniqueName: \"kubernetes.io/projected/e3c26ddb-8907-4b44-bc42-86138dc25d8b-kube-api-access-96mjt\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.970543 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:13 crc kubenswrapper[4796]: I1212 05:08:13.970557 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3c26ddb-8907-4b44-bc42-86138dc25d8b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.317790 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" event={"ID":"e3c26ddb-8907-4b44-bc42-86138dc25d8b","Type":"ContainerDied","Data":"d320cd7b92772b34ed6da72f172c8469fe09e79bf532efaa8aac88aa8ecd0cbc"} Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.317840 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.317850 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d320cd7b92772b34ed6da72f172c8469fe09e79bf532efaa8aac88aa8ecd0cbc" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.421022 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6"] Dec 12 05:08:14 crc kubenswrapper[4796]: E1212 05:08:14.423449 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c26ddb-8907-4b44-bc42-86138dc25d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.423477 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c26ddb-8907-4b44-bc42-86138dc25d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.423738 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c26ddb-8907-4b44-bc42-86138dc25d8b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.424714 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.427477 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.427543 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.428387 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.428593 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.429809 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.429931 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.429809 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.430099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.446953 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6"] Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478551 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478675 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2dx\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478806 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478838 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478891 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.478959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.479079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: 
\"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.479180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.479243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:14 crc kubenswrapper[4796]: I1212 05:08:14.479375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280466 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280519 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280737 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: 
\"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.280815 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2dx\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.286607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.289639 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.316104 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.316713 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2dx\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.326330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.330552 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.333822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.335151 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.342721 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.344824 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.345130 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.345796 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.346581 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.346723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.385145 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.388606 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.433048 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.586364 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj87\" (UniqueName: \"kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.586425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.586443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.644789 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.688317 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj87\" (UniqueName: \"kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.688372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.688396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.688889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.689569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.710597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj87\" (UniqueName: \"kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87\") pod \"redhat-operators-tqx2x\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:15 crc kubenswrapper[4796]: I1212 05:08:15.760398 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:16 crc kubenswrapper[4796]: I1212 05:08:16.332261 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6"] Dec 12 05:08:16 crc kubenswrapper[4796]: I1212 05:08:16.346886 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:16 crc kubenswrapper[4796]: I1212 05:08:16.383196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" event={"ID":"f181d2cb-61a4-4328-88b8-18dd8cd24228","Type":"ContainerStarted","Data":"88461fea0d7a9d96d38cca58e1dca390d6eb5ddfadeb94c0ac82487d4f18100b"} Dec 12 05:08:16 crc kubenswrapper[4796]: I1212 05:08:16.387103 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerStarted","Data":"8a8a38f33b87710c33d4b832a00cb1d9c9bb48715ec1ebd63da232bfbfa421cf"} Dec 12 05:08:17 crc kubenswrapper[4796]: I1212 05:08:17.398386 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerID="36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa" exitCode=0 Dec 12 05:08:17 crc kubenswrapper[4796]: I1212 05:08:17.398433 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerDied","Data":"36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa"} Dec 12 05:08:17 crc kubenswrapper[4796]: I1212 05:08:17.401439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" event={"ID":"f181d2cb-61a4-4328-88b8-18dd8cd24228","Type":"ContainerStarted","Data":"f0abd416cb61910178b77710e58a37551369e2b056470ecc63fa6fc1e24897db"} Dec 12 05:08:17 crc kubenswrapper[4796]: I1212 05:08:17.469997 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" podStartSLOduration=3.263553502 podStartE2EDuration="3.469974241s" podCreationTimestamp="2025-12-12 05:08:14 +0000 UTC" firstStartedPulling="2025-12-12 05:08:16.370371617 +0000 UTC m=+2087.246388764" lastFinishedPulling="2025-12-12 05:08:16.576792356 +0000 UTC m=+2087.452809503" observedRunningTime="2025-12-12 05:08:17.446693742 +0000 UTC m=+2088.322710899" watchObservedRunningTime="2025-12-12 05:08:17.469974241 +0000 UTC m=+2088.345991398" Dec 12 05:08:19 crc kubenswrapper[4796]: I1212 05:08:19.426157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerStarted","Data":"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb"} Dec 12 05:08:22 crc kubenswrapper[4796]: I1212 05:08:22.450888 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerID="5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb" exitCode=0 Dec 12 05:08:22 crc kubenswrapper[4796]: I1212 05:08:22.451360 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" 
event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerDied","Data":"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb"} Dec 12 05:08:23 crc kubenswrapper[4796]: I1212 05:08:23.490020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerStarted","Data":"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8"} Dec 12 05:08:23 crc kubenswrapper[4796]: I1212 05:08:23.511275 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tqx2x" podStartSLOduration=2.81799353 podStartE2EDuration="8.511258049s" podCreationTimestamp="2025-12-12 05:08:15 +0000 UTC" firstStartedPulling="2025-12-12 05:08:17.402451028 +0000 UTC m=+2088.278468175" lastFinishedPulling="2025-12-12 05:08:23.095715547 +0000 UTC m=+2093.971732694" observedRunningTime="2025-12-12 05:08:23.507044996 +0000 UTC m=+2094.383062143" watchObservedRunningTime="2025-12-12 05:08:23.511258049 +0000 UTC m=+2094.387275196" Dec 12 05:08:25 crc kubenswrapper[4796]: I1212 05:08:25.760598 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:25 crc kubenswrapper[4796]: I1212 05:08:25.760905 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:26 crc kubenswrapper[4796]: I1212 05:08:26.803368 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqx2x" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="registry-server" probeResult="failure" output=< Dec 12 05:08:26 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 05:08:26 crc kubenswrapper[4796]: > Dec 12 05:08:35 crc kubenswrapper[4796]: I1212 05:08:35.829592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:35 crc kubenswrapper[4796]: I1212 05:08:35.897935 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:36 crc kubenswrapper[4796]: I1212 05:08:36.534078 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:37 crc kubenswrapper[4796]: I1212 05:08:37.599330 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tqx2x" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="registry-server" containerID="cri-o://29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8" gracePeriod=2 Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.107815 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.203970 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rj87\" (UniqueName: \"kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87\") pod \"ce12db34-77c0-41d3-a120-db3c0eeda07b\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.204180 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities\") pod \"ce12db34-77c0-41d3-a120-db3c0eeda07b\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.204199 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content\") pod \"ce12db34-77c0-41d3-a120-db3c0eeda07b\" (UID: \"ce12db34-77c0-41d3-a120-db3c0eeda07b\") " Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.206994 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities" (OuterVolumeSpecName: "utilities") pod "ce12db34-77c0-41d3-a120-db3c0eeda07b" (UID: "ce12db34-77c0-41d3-a120-db3c0eeda07b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.210908 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87" (OuterVolumeSpecName: "kube-api-access-5rj87") pod "ce12db34-77c0-41d3-a120-db3c0eeda07b" (UID: "ce12db34-77c0-41d3-a120-db3c0eeda07b"). InnerVolumeSpecName "kube-api-access-5rj87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.306591 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.307007 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rj87\" (UniqueName: \"kubernetes.io/projected/ce12db34-77c0-41d3-a120-db3c0eeda07b-kube-api-access-5rj87\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.339446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce12db34-77c0-41d3-a120-db3c0eeda07b" (UID: "ce12db34-77c0-41d3-a120-db3c0eeda07b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.409004 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce12db34-77c0-41d3-a120-db3c0eeda07b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.609695 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerID="29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8" exitCode=0 Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.609736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerDied","Data":"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8"} Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.609764 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqx2x" event={"ID":"ce12db34-77c0-41d3-a120-db3c0eeda07b","Type":"ContainerDied","Data":"8a8a38f33b87710c33d4b832a00cb1d9c9bb48715ec1ebd63da232bfbfa421cf"} Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.609762 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqx2x" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.609782 4796 scope.go:117] "RemoveContainer" containerID="29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.659417 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.688016 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tqx2x"] Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.692472 4796 scope.go:117] "RemoveContainer" containerID="5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.777405 4796 scope.go:117] "RemoveContainer" containerID="36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.818516 4796 scope.go:117] "RemoveContainer" containerID="29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8" Dec 12 05:08:38 crc kubenswrapper[4796]: E1212 05:08:38.819015 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8\": container with ID starting with 29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8 not found: ID does not exist" containerID="29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.819048 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8"} err="failed to get container status \"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8\": rpc error: code = NotFound desc = could not find container \"29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8\": container with ID starting with 29ef9ed53cff4bd1d3b14a0af53e49c3f42c59b9713a6eea9ed6a1431454eaa8 not found: ID does not exist" Dec 12 05:08:38 crc 
kubenswrapper[4796]: I1212 05:08:38.819075 4796 scope.go:117] "RemoveContainer" containerID="5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb" Dec 12 05:08:38 crc kubenswrapper[4796]: E1212 05:08:38.819359 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb\": container with ID starting with 5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb not found: ID does not exist" containerID="5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.819386 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb"} err="failed to get container status \"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb\": rpc error: code = NotFound desc = could not find container \"5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb\": container with ID starting with 5c2cba3e843b9d520c9f605b2046472f64a4c236eb6b94acaa7a2fd12c7bc1bb not found: ID does not exist" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.819406 4796 scope.go:117] "RemoveContainer" containerID="36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa" Dec 12 05:08:38 crc kubenswrapper[4796]: E1212 05:08:38.819619 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa\": container with ID starting with 36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa not found: ID does not exist" containerID="36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa" Dec 12 05:08:38 crc kubenswrapper[4796]: I1212 05:08:38.819643 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa"} err="failed to get container status \"36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa\": rpc error: code = NotFound desc = could not find container \"36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa\": container with ID starting with 36d407b3ea90625b939fe4fed3cb7f94709837e704c63e1723a11321ede4d6fa not found: ID does not exist" Dec 12 05:08:39 crc kubenswrapper[4796]: I1212 05:08:39.422884 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" path="/var/lib/kubelet/pods/ce12db34-77c0-41d3-a120-db3c0eeda07b/volumes" Dec 12 05:08:58 crc kubenswrapper[4796]: I1212 05:08:58.810967 4796 generic.go:334] "Generic (PLEG): container finished" podID="f181d2cb-61a4-4328-88b8-18dd8cd24228" containerID="f0abd416cb61910178b77710e58a37551369e2b056470ecc63fa6fc1e24897db" exitCode=0 Dec 12 05:08:58 crc kubenswrapper[4796]: I1212 05:08:58.811101 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" event={"ID":"f181d2cb-61a4-4328-88b8-18dd8cd24228","Type":"ContainerDied","Data":"f0abd416cb61910178b77710e58a37551369e2b056470ecc63fa6fc1e24897db"} Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.262933 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334800 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2dx\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334846 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334865 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334944 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.334986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335015 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335066 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335126 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335155 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335228 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.335256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f181d2cb-61a4-4328-88b8-18dd8cd24228\" (UID: \"f181d2cb-61a4-4328-88b8-18dd8cd24228\") " Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.343645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.349864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.349877 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.349976 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx" (OuterVolumeSpecName: "kube-api-access-6j2dx") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "kube-api-access-6j2dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.350372 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.350464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.350691 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.350716 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.352729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.353367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.358251 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.359449 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.373776 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory" (OuterVolumeSpecName: "inventory") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.390751 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f181d2cb-61a4-4328-88b8-18dd8cd24228" (UID: "f181d2cb-61a4-4328-88b8-18dd8cd24228"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439088 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439124 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439135 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j2dx\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-kube-api-access-6j2dx\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439146 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439155 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439164 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439173 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439181 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439190 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439198 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439206 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439217 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-telemetry-default-certs-0\") on node 
\"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439226 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f181d2cb-61a4-4328-88b8-18dd8cd24228-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.439236 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f181d2cb-61a4-4328-88b8-18dd8cd24228-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.841857 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" event={"ID":"f181d2cb-61a4-4328-88b8-18dd8cd24228","Type":"ContainerDied","Data":"88461fea0d7a9d96d38cca58e1dca390d6eb5ddfadeb94c0ac82487d4f18100b"} Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.842082 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88461fea0d7a9d96d38cca58e1dca390d6eb5ddfadeb94c0ac82487d4f18100b" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.842083 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978207 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x"] Dec 12 05:09:00 crc kubenswrapper[4796]: E1212 05:09:00.978708 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="extract-utilities" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978729 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="extract-utilities" Dec 12 05:09:00 crc kubenswrapper[4796]: E1212 05:09:00.978743 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f181d2cb-61a4-4328-88b8-18dd8cd24228" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978753 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f181d2cb-61a4-4328-88b8-18dd8cd24228" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 05:09:00 crc kubenswrapper[4796]: E1212 05:09:00.978772 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="registry-server" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978778 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="registry-server" Dec 12 05:09:00 crc kubenswrapper[4796]: E1212 05:09:00.978804 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="extract-content" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978810 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="extract-content" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.978981 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f181d2cb-61a4-4328-88b8-18dd8cd24228" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 
05:09:00.978993 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce12db34-77c0-41d3-a120-db3c0eeda07b" containerName="registry-server" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.979811 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.983056 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.983144 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.983269 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.983310 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.983452 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:09:00 crc kubenswrapper[4796]: I1212 05:09:00.986082 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x"] Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.048826 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.049143 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.049290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhjh\" (UniqueName: \"kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.049333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.049438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: 
\"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.150958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.151659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhjh\" (UniqueName: \"kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.151960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.152038 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.152290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.152508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.156411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.159803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 
05:09:01.164847 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.168772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkhjh\" (UniqueName: \"kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tkc6x\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.316658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.842861 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x"] Dec 12 05:09:01 crc kubenswrapper[4796]: I1212 05:09:01.853539 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" event={"ID":"60d6d74d-f5f7-43c4-8462-f073926de480","Type":"ContainerStarted","Data":"a5579714952a3ef783c7eb325b8fa7f20c17936bc68ecd92742c39332e82d67b"} Dec 12 05:09:02 crc kubenswrapper[4796]: I1212 05:09:02.865373 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" event={"ID":"60d6d74d-f5f7-43c4-8462-f073926de480","Type":"ContainerStarted","Data":"f1fbba5756f39fb5b0735970544eaeeffa3b5c3a99424a190da673e7c6e7d585"} Dec 12 05:09:02 crc kubenswrapper[4796]: I1212 05:09:02.893759 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" podStartSLOduration=2.73894073 podStartE2EDuration="2.893740013s" podCreationTimestamp="2025-12-12 05:09:00 +0000 UTC" firstStartedPulling="2025-12-12 05:09:01.846984993 +0000 UTC m=+2132.723002140" lastFinishedPulling="2025-12-12 05:09:02.001784266 +0000 UTC m=+2132.877801423" observedRunningTime="2025-12-12 05:09:02.893411224 +0000 UTC m=+2133.769428371" watchObservedRunningTime="2025-12-12 05:09:02.893740013 +0000 UTC m=+2133.769757170" Dec 12 05:09:02 crc kubenswrapper[4796]: I1212 05:09:02.969899 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:09:02 crc kubenswrapper[4796]: I1212 05:09:02.969973 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.646470 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.647237 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" podUID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" containerName="controller-manager" containerID="cri-o://c05a1403803fee7638f14450d31798394768fa8e6533b9fc10b87d39035570bc" gracePeriod=30 Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.771006 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.771235 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" podUID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" containerName="route-controller-manager" containerID="cri-o://00e06dd8dcb6fad7ae180a5c219b6931eb340158742998e700cd8eb7ea01ad2a" gracePeriod=30 Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.914597 4796 generic.go:334] "Generic (PLEG): container finished" podID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" containerID="00e06dd8dcb6fad7ae180a5c219b6931eb340158742998e700cd8eb7ea01ad2a" exitCode=0 Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.914676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" event={"ID":"edc895ae-37e3-47e3-a313-91e2d7d25ee8","Type":"ContainerDied","Data":"00e06dd8dcb6fad7ae180a5c219b6931eb340158742998e700cd8eb7ea01ad2a"} Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.919905 4796 generic.go:334] "Generic (PLEG): container finished" podID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" containerID="c05a1403803fee7638f14450d31798394768fa8e6533b9fc10b87d39035570bc" exitCode=0 Dec 12 05:09:07 crc kubenswrapper[4796]: I1212 05:09:07.919955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" event={"ID":"3aebcd83-2d6f-4f3d-a28d-313b3756c65f","Type":"ContainerDied","Data":"c05a1403803fee7638f14450d31798394768fa8e6533b9fc10b87d39035570bc"} Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.156949 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.250419 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.301998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert\") pod \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca\") pod \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302132 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkmr\" (UniqueName: \"kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr\") pod \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302192 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config\") pod \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles\") pod \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56vwh\" (UniqueName: \"kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh\") pod \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302404 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca\") pod \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302461 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert\") pod \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\" (UID: \"3aebcd83-2d6f-4f3d-a28d-313b3756c65f\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.302496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config\") pod \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\" (UID: \"edc895ae-37e3-47e3-a313-91e2d7d25ee8\") " Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.303536 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config" (OuterVolumeSpecName: "config") pod "edc895ae-37e3-47e3-a313-91e2d7d25ee8" (UID: 
"edc895ae-37e3-47e3-a313-91e2d7d25ee8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.303538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca" (OuterVolumeSpecName: "client-ca") pod "edc895ae-37e3-47e3-a313-91e2d7d25ee8" (UID: "edc895ae-37e3-47e3-a313-91e2d7d25ee8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.303627 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca" (OuterVolumeSpecName: "client-ca") pod "3aebcd83-2d6f-4f3d-a28d-313b3756c65f" (UID: "3aebcd83-2d6f-4f3d-a28d-313b3756c65f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.303614 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3aebcd83-2d6f-4f3d-a28d-313b3756c65f" (UID: "3aebcd83-2d6f-4f3d-a28d-313b3756c65f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.303823 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config" (OuterVolumeSpecName: "config") pod "3aebcd83-2d6f-4f3d-a28d-313b3756c65f" (UID: "3aebcd83-2d6f-4f3d-a28d-313b3756c65f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.308571 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edc895ae-37e3-47e3-a313-91e2d7d25ee8" (UID: "edc895ae-37e3-47e3-a313-91e2d7d25ee8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.309101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3aebcd83-2d6f-4f3d-a28d-313b3756c65f" (UID: "3aebcd83-2d6f-4f3d-a28d-313b3756c65f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.309128 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh" (OuterVolumeSpecName: "kube-api-access-56vwh") pod "3aebcd83-2d6f-4f3d-a28d-313b3756c65f" (UID: "3aebcd83-2d6f-4f3d-a28d-313b3756c65f"). InnerVolumeSpecName "kube-api-access-56vwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.313802 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr" (OuterVolumeSpecName: "kube-api-access-5gkmr") pod "edc895ae-37e3-47e3-a313-91e2d7d25ee8" (UID: "edc895ae-37e3-47e3-a313-91e2d7d25ee8"). 
InnerVolumeSpecName "kube-api-access-5gkmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404750 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404814 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56vwh\" (UniqueName: \"kubernetes.io/projected/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-kube-api-access-56vwh\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404833 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404847 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404861 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc895ae-37e3-47e3-a313-91e2d7d25ee8-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404873 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc895ae-37e3-47e3-a313-91e2d7d25ee8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404885 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404897 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkmr\" (UniqueName: \"kubernetes.io/projected/edc895ae-37e3-47e3-a313-91e2d7d25ee8-kube-api-access-5gkmr\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.404912 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aebcd83-2d6f-4f3d-a28d-313b3756c65f-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.930328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" event={"ID":"edc895ae-37e3-47e3-a313-91e2d7d25ee8","Type":"ContainerDied","Data":"1bb742af648a7ee6496b2e14d2489c8ded33dd21e9799d0a7d3552dd993a14b2"} Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.930407 4796 scope.go:117] "RemoveContainer" containerID="00e06dd8dcb6fad7ae180a5c219b6931eb340158742998e700cd8eb7ea01ad2a" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.930343 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.933395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" event={"ID":"3aebcd83-2d6f-4f3d-a28d-313b3756c65f","Type":"ContainerDied","Data":"5ec00755f78aa59adc873d5ab8b76f308bf9cf51809a2c65ada8f9d858e53c83"} Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.933479 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.966441 4796 scope.go:117] "RemoveContainer" containerID="c05a1403803fee7638f14450d31798394768fa8e6533b9fc10b87d39035570bc" Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.982159 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 05:09:08 crc kubenswrapper[4796]: I1212 05:09:08.992479 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-859bf9f4f9-v8z9k"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:08.999979 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.010541 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8b6b7498-4pp6t"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.329849 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:09 crc kubenswrapper[4796]: E1212 05:09:09.330696 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" containerName="route-controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.330721 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" containerName="route-controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: E1212 05:09:09.330744 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" containerName="controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.330756 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" containerName="controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.331017 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" containerName="controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.331042 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" containerName="route-controller-manager" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.331862 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.337688 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.337905 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.338170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.341246 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.341605 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.342268 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.343109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.344092 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350240 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350241 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350495 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350756 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350770 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.350921 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.357207 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.357533 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.367263 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.423376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.425312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.425475 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xpb\" (UniqueName: \"kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.425606 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.424909 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aebcd83-2d6f-4f3d-a28d-313b3756c65f" path="/var/lib/kubelet/pods/3aebcd83-2d6f-4f3d-a28d-313b3756c65f/volumes" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.425813 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.425899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlbc\" (UniqueName: \"kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.426030 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.426100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: 
\"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.426197 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.426854 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc895ae-37e3-47e3-a313-91e2d7d25ee8" path="/var/lib/kubelet/pods/edc895ae-37e3-47e3-a313-91e2d7d25ee8/volumes" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlbc\" (UniqueName: \"kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528690 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54xpb\" (UniqueName: \"kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.528879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.529705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.529861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.530539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.530555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.530936 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.537361 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 
12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.537994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.553272 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlbc\" (UniqueName: \"kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc\") pod \"route-controller-manager-59df47db45-p4r5x\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.553796 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xpb\" (UniqueName: \"kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb\") pod \"controller-manager-746fb77d75-6rkkc\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.653609 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:09 crc kubenswrapper[4796]: I1212 05:09:09.670681 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.226210 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.345196 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:10 crc kubenswrapper[4796]: W1212 05:09:10.348489 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f20baac_2690_4308_ae06_12d12d6c063f.slice/crio-13bda1aca89a88f09db014f0828d65207d906cd883535d45fe756e466475987f WatchSource:0}: Error finding container 13bda1aca89a88f09db014f0828d65207d906cd883535d45fe756e466475987f: Status 404 returned error can't find the container with id 13bda1aca89a88f09db014f0828d65207d906cd883535d45fe756e466475987f Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.974392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" event={"ID":"2f20baac-2690-4308-ae06-12d12d6c063f","Type":"ContainerStarted","Data":"a71a54cb0e714118563657c272198d727c77f2364b2441ca10fb674baa14fa56"} Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.974721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" event={"ID":"2f20baac-2690-4308-ae06-12d12d6c063f","Type":"ContainerStarted","Data":"13bda1aca89a88f09db014f0828d65207d906cd883535d45fe756e466475987f"} Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.975377 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.976666 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" event={"ID":"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad","Type":"ContainerStarted","Data":"b14dbfa8b452a3091e6e5d12157eaf3d2d8c1f55bb0756e10dd0c9fbea9938d0"} Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.976696 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" event={"ID":"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad","Type":"ContainerStarted","Data":"9fd0f353dd966e21eaebbb765e9832858625e1628f11431ab949c2e7f1a987ea"} Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.977732 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:10 crc kubenswrapper[4796]: I1212 05:09:10.984109 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:11 crc kubenswrapper[4796]: I1212 05:09:11.015466 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" podStartSLOduration=4.015446021 podStartE2EDuration="4.015446021s" podCreationTimestamp="2025-12-12 05:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:09:11.011565171 +0000 UTC m=+2141.887582318" watchObservedRunningTime="2025-12-12 05:09:11.015446021 +0000 UTC m=+2141.891463168" Dec 12 05:09:11 crc kubenswrapper[4796]: I1212 05:09:11.063821 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" podStartSLOduration=4.063799875 podStartE2EDuration="4.063799875s" podCreationTimestamp="2025-12-12 05:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:09:11.062543756 +0000 UTC m=+2141.938560913" watchObservedRunningTime="2025-12-12 05:09:11.063799875 +0000 UTC m=+2141.939817022" Dec 12 05:09:11 crc kubenswrapper[4796]: I1212 05:09:11.660468 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.136891 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.139879 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.148576 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.205614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.205755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.205930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bmq\" (UniqueName: \"kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.308044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bmq\" (UniqueName: \"kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.308650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.308855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.309169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.309351 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.328108 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k2bmq\" (UniqueName: \"kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq\") pod \"redhat-marketplace-wtk8r\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:13 crc kubenswrapper[4796]: I1212 05:09:13.477199 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:14 crc kubenswrapper[4796]: I1212 05:09:14.018787 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:15 crc kubenswrapper[4796]: I1212 05:09:15.016576 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerID="621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d" exitCode=0 Dec 12 05:09:15 crc kubenswrapper[4796]: I1212 05:09:15.016655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerDied","Data":"621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d"} Dec 12 05:09:15 crc kubenswrapper[4796]: I1212 05:09:15.019636 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerStarted","Data":"2cc3a7d07a1df9b17b6420c67b039b2a0f5538fd757a29223570dbe0f060e3dc"} Dec 12 05:09:16 crc kubenswrapper[4796]: I1212 05:09:16.032166 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerStarted","Data":"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478"} Dec 12 05:09:17 crc kubenswrapper[4796]: I1212 05:09:17.055960 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerID="7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478" exitCode=0 Dec 12 05:09:17 crc kubenswrapper[4796]: I1212 05:09:17.056013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerDied","Data":"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478"} Dec 12 05:09:18 crc kubenswrapper[4796]: I1212 05:09:18.069057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerStarted","Data":"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa"} Dec 12 05:09:18 crc kubenswrapper[4796]: I1212 05:09:18.097148 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtk8r" podStartSLOduration=2.466384412 podStartE2EDuration="5.097122701s" podCreationTimestamp="2025-12-12 05:09:13 +0000 UTC" firstStartedPulling="2025-12-12 05:09:15.018410026 +0000 UTC m=+2145.894427173" lastFinishedPulling="2025-12-12 05:09:17.649148305 +0000 UTC m=+2148.525165462" observedRunningTime="2025-12-12 05:09:18.089537384 +0000 UTC m=+2148.965554541" watchObservedRunningTime="2025-12-12 05:09:18.097122701 +0000 UTC m=+2148.973139868" Dec 12 05:09:23 crc kubenswrapper[4796]: I1212 05:09:23.478766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:23 crc kubenswrapper[4796]: I1212 05:09:23.480373 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:23 crc kubenswrapper[4796]: I1212 05:09:23.530012 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:24 crc kubenswrapper[4796]: I1212 05:09:24.175792 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:24 crc kubenswrapper[4796]: I1212 05:09:24.231949 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.145229 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wtk8r" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="registry-server" containerID="cri-o://328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa" gracePeriod=2 Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.720664 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.784606 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2bmq\" (UniqueName: \"kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq\") pod \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.784906 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content\") pod \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.785128 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities\") pod \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\" (UID: \"5a590ef1-1118-4ec1-a99a-4e6d07c87414\") " Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.785914 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities" (OuterVolumeSpecName: "utilities") pod "5a590ef1-1118-4ec1-a99a-4e6d07c87414" (UID: "5a590ef1-1118-4ec1-a99a-4e6d07c87414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.791394 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq" (OuterVolumeSpecName: "kube-api-access-k2bmq") pod "5a590ef1-1118-4ec1-a99a-4e6d07c87414" (UID: "5a590ef1-1118-4ec1-a99a-4e6d07c87414"). InnerVolumeSpecName "kube-api-access-k2bmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.813402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a590ef1-1118-4ec1-a99a-4e6d07c87414" (UID: "5a590ef1-1118-4ec1-a99a-4e6d07c87414"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.887133 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.887173 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a590ef1-1118-4ec1-a99a-4e6d07c87414-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:26 crc kubenswrapper[4796]: I1212 05:09:26.887186 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2bmq\" (UniqueName: \"kubernetes.io/projected/5a590ef1-1118-4ec1-a99a-4e6d07c87414-kube-api-access-k2bmq\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.154147 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerID="328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa" exitCode=0 Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.154249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerDied","Data":"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa"} Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.154457 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtk8r" event={"ID":"5a590ef1-1118-4ec1-a99a-4e6d07c87414","Type":"ContainerDied","Data":"2cc3a7d07a1df9b17b6420c67b039b2a0f5538fd757a29223570dbe0f060e3dc"} Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.154476 4796 scope.go:117] "RemoveContainer" containerID="328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.154297 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtk8r" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.176178 4796 scope.go:117] "RemoveContainer" containerID="7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.200802 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.201419 4796 scope.go:117] "RemoveContainer" containerID="621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.208937 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtk8r"] Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.243220 4796 scope.go:117] "RemoveContainer" containerID="328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa" Dec 12 05:09:27 crc kubenswrapper[4796]: E1212 05:09:27.243798 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa\": container with ID starting with 328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa not found: ID does not exist" containerID="328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.243847 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa"} err="failed to get container status \"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa\": rpc error: code = NotFound desc = could not find container \"328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa\": container with ID starting with 328405577dea4b0ad0cecd821ef3f26918e64bce46d4547bdece8272de1066aa not found: ID does not exist" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.243872 4796 scope.go:117] "RemoveContainer" containerID="7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478" Dec 12 05:09:27 crc kubenswrapper[4796]: E1212 05:09:27.244266 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478\": container with ID starting with 7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478 not found: ID does not exist" containerID="7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.244325 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478"} err="failed to get container status \"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478\": rpc error: code = NotFound desc = could not find container \"7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478\": container with ID starting with 7d2226313605eec334cdd4b15bb335df83b91ad86fb6fe813693cefda2b82478 not found: ID does not exist" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.244352 4796 scope.go:117] "RemoveContainer" containerID="621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d" Dec 12 05:09:27 crc kubenswrapper[4796]: E1212 05:09:27.244690 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d\": container with ID starting with 621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d not found: ID does not exist" containerID="621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.244710 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d"} err="failed to get container status \"621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d\": rpc error: code = NotFound desc = could not find container \"621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d\": container with ID starting with 621a29d4d3de5f22a68d919ed04986ef4c45186dbc8432e37c12aef88f1b7b7d not found: ID does not exist" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.420684 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" path="/var/lib/kubelet/pods/5a590ef1-1118-4ec1-a99a-4e6d07c87414/volumes" Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.636986 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.637242 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" podUID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" containerName="controller-manager" containerID="cri-o://b14dbfa8b452a3091e6e5d12157eaf3d2d8c1f55bb0756e10dd0c9fbea9938d0" gracePeriod=30 Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.655552 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:27 crc kubenswrapper[4796]: I1212 05:09:27.655743 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" podUID="2f20baac-2690-4308-ae06-12d12d6c063f" containerName="route-controller-manager" containerID="cri-o://a71a54cb0e714118563657c272198d727c77f2364b2441ca10fb674baa14fa56" gracePeriod=30 Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.184267 4796 generic.go:334] "Generic (PLEG): container finished" podID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" containerID="b14dbfa8b452a3091e6e5d12157eaf3d2d8c1f55bb0756e10dd0c9fbea9938d0" exitCode=0 Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.184522 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" event={"ID":"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad","Type":"ContainerDied","Data":"b14dbfa8b452a3091e6e5d12157eaf3d2d8c1f55bb0756e10dd0c9fbea9938d0"} Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.186420 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f20baac-2690-4308-ae06-12d12d6c063f" containerID="a71a54cb0e714118563657c272198d727c77f2364b2441ca10fb674baa14fa56" exitCode=0 Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.186461 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" 
event={"ID":"2f20baac-2690-4308-ae06-12d12d6c063f","Type":"ContainerDied","Data":"a71a54cb0e714118563657c272198d727c77f2364b2441ca10fb674baa14fa56"} Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.315987 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.321825 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.431088 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca\") pod \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.431210 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert\") pod \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.431250 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config\") pod \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.431288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config\") pod \"2f20baac-2690-4308-ae06-12d12d6c063f\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" (UID: "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432332 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config" (OuterVolumeSpecName: "config") pod "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" (UID: "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432369 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config" (OuterVolumeSpecName: "config") pod "2f20baac-2690-4308-ae06-12d12d6c063f" (UID: "2f20baac-2690-4308-ae06-12d12d6c063f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.431462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjlbc\" (UniqueName: \"kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc\") pod \"2f20baac-2690-4308-ae06-12d12d6c063f\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles\") pod \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432602 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54xpb\" (UniqueName: \"kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb\") pod \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\" (UID: \"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert\") pod \"2f20baac-2690-4308-ae06-12d12d6c063f\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.432777 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca\") pod \"2f20baac-2690-4308-ae06-12d12d6c063f\" (UID: \"2f20baac-2690-4308-ae06-12d12d6c063f\") " Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433279 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" (UID: "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433639 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f20baac-2690-4308-ae06-12d12d6c063f" (UID: "2f20baac-2690-4308-ae06-12d12d6c063f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433917 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433936 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433947 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433961 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f20baac-2690-4308-ae06-12d12d6c063f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.433970 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.441295 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f20baac-2690-4308-ae06-12d12d6c063f" (UID: "2f20baac-2690-4308-ae06-12d12d6c063f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.454053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc" (OuterVolumeSpecName: "kube-api-access-fjlbc") pod "2f20baac-2690-4308-ae06-12d12d6c063f" (UID: "2f20baac-2690-4308-ae06-12d12d6c063f"). InnerVolumeSpecName "kube-api-access-fjlbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.454133 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" (UID: "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.456620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb" (OuterVolumeSpecName: "kube-api-access-54xpb") pod "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" (UID: "4a8ab310-0427-4d67-b01c-a5fae0c6b8ad"). InnerVolumeSpecName "kube-api-access-54xpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.535229 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f20baac-2690-4308-ae06-12d12d6c063f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.535692 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.535716 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjlbc\" (UniqueName: \"kubernetes.io/projected/2f20baac-2690-4308-ae06-12d12d6c063f-kube-api-access-fjlbc\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:28 crc kubenswrapper[4796]: I1212 05:09:28.535730 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54xpb\" (UniqueName: \"kubernetes.io/projected/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad-kube-api-access-54xpb\") on node \"crc\" DevicePath \"\"" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.197964 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.197959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746fb77d75-6rkkc" event={"ID":"4a8ab310-0427-4d67-b01c-a5fae0c6b8ad","Type":"ContainerDied","Data":"9fd0f353dd966e21eaebbb765e9832858625e1628f11431ab949c2e7f1a987ea"} Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.198113 4796 scope.go:117] "RemoveContainer" containerID="b14dbfa8b452a3091e6e5d12157eaf3d2d8c1f55bb0756e10dd0c9fbea9938d0" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.199855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" event={"ID":"2f20baac-2690-4308-ae06-12d12d6c063f","Type":"ContainerDied","Data":"13bda1aca89a88f09db014f0828d65207d906cd883535d45fe756e466475987f"} Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.199877 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.223378 4796 scope.go:117] "RemoveContainer" containerID="a71a54cb0e714118563657c272198d727c77f2364b2441ca10fb674baa14fa56" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.238373 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.252280 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-746fb77d75-6rkkc"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.260590 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.269970 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df47db45-p4r5x"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360000 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:09:29 crc kubenswrapper[4796]: E1212 05:09:29.360488 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="extract-content" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360510 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="extract-content" Dec 12 05:09:29 crc kubenswrapper[4796]: E1212 05:09:29.360547 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="registry-server" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360556 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="registry-server" Dec 12 05:09:29 crc kubenswrapper[4796]: E1212 05:09:29.360573 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f20baac-2690-4308-ae06-12d12d6c063f" containerName="route-controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360582 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f20baac-2690-4308-ae06-12d12d6c063f" containerName="route-controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: E1212 05:09:29.360596 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="extract-utilities" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360604 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="extract-utilities" Dec 12 05:09:29 crc kubenswrapper[4796]: E1212 05:09:29.360623 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" containerName="controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360631 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" containerName="controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.360868 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a590ef1-1118-4ec1-a99a-4e6d07c87414" containerName="registry-server" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.361777 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2f20baac-2690-4308-ae06-12d12d6c063f" containerName="route-controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.361805 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" containerName="controller-manager" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.362611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.369679 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.369982 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.370143 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.371084 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.372632 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.373892 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.379401 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.379436 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.380634 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.380798 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.381578 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.381859 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.382099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.382346 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.382531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.382725 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.393316 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.447128 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f20baac-2690-4308-ae06-12d12d6c063f" path="/var/lib/kubelet/pods/2f20baac-2690-4308-ae06-12d12d6c063f/volumes" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.448373 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8ab310-0427-4d67-b01c-a5fae0c6b8ad" path="/var/lib/kubelet/pods/4a8ab310-0427-4d67-b01c-a5fae0c6b8ad/volumes" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.454210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.454519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.454647 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.454788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.454936 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkk8\" (UniqueName: \"kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.455099 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.455184 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.455268 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4c9m\" (UniqueName: \"kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.455381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557765 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557903 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.557977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkk8\" (UniqueName: 
\"kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.558022 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.558042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.558061 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4c9m\" (UniqueName: \"kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.559089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.559258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.559674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.559821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.559941 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " 
pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.563913 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.580670 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.580736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4c9m\" (UniqueName: \"kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m\") pod \"controller-manager-6d64c9b9bf-59k6h\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.581418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkk8\" (UniqueName: \"kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8\") pod \"route-controller-manager-5b6d7979b4-8d46x\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.694824 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:29 crc kubenswrapper[4796]: I1212 05:09:29.714904 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:30 crc kubenswrapper[4796]: I1212 05:09:30.167329 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:09:30 crc kubenswrapper[4796]: I1212 05:09:30.226214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" event={"ID":"2cd00450-0856-4304-87dc-49a681645acd","Type":"ContainerStarted","Data":"605c725aff246c0859c16e43406aa3e452b7131cc5003919bf7946f21dff5394"} Dec 12 05:09:30 crc kubenswrapper[4796]: I1212 05:09:30.241829 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:09:30 crc kubenswrapper[4796]: W1212 05:09:30.256531 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115ae990_f683_4fd1_a4e4_4eef88a10f24.slice/crio-deaf2967ddcefd775f16227c8c6a6568907c3bc041b99c32bac9ed900c07abf8 WatchSource:0}: Error finding container deaf2967ddcefd775f16227c8c6a6568907c3bc041b99c32bac9ed900c07abf8: Status 404 returned error can't find the container with id deaf2967ddcefd775f16227c8c6a6568907c3bc041b99c32bac9ed900c07abf8 Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.262910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" event={"ID":"115ae990-f683-4fd1-a4e4-4eef88a10f24","Type":"ContainerStarted","Data":"85c3993a031cc83eccb973b69625f3b1391af83b656e796008df0e838bf4f9a7"} Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.263240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" event={"ID":"115ae990-f683-4fd1-a4e4-4eef88a10f24","Type":"ContainerStarted","Data":"deaf2967ddcefd775f16227c8c6a6568907c3bc041b99c32bac9ed900c07abf8"} Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.263592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.270416 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" event={"ID":"2cd00450-0856-4304-87dc-49a681645acd","Type":"ContainerStarted","Data":"9e10cb2a71d2cd1a37c9c140f9806072158e8a2c4c3807d70779b7480944b5f0"} Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.271476 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.276120 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.301456 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.333219 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" podStartSLOduration=4.333199167 
podStartE2EDuration="4.333199167s" podCreationTimestamp="2025-12-12 05:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:09:31.290240383 +0000 UTC m=+2162.166257540" watchObservedRunningTime="2025-12-12 05:09:31.333199167 +0000 UTC m=+2162.209216314" Dec 12 05:09:31 crc kubenswrapper[4796]: I1212 05:09:31.426729 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" podStartSLOduration=4.426694533 podStartE2EDuration="4.426694533s" podCreationTimestamp="2025-12-12 05:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:09:31.410948851 +0000 UTC m=+2162.286966018" watchObservedRunningTime="2025-12-12 05:09:31.426694533 +0000 UTC m=+2162.302711680" Dec 12 05:09:32 crc kubenswrapper[4796]: I1212 05:09:32.969523 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:09:32 crc kubenswrapper[4796]: I1212 05:09:32.970414 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.482425 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.484852 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.495064 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.602799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.602944 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.603020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwgw\" (UniqueName: \"kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.705013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.705375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.705447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfwgw\" (UniqueName: \"kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.705743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.705788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.735204 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mfwgw\" (UniqueName: \"kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw\") pod \"community-operators-cxp8x\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:49 crc kubenswrapper[4796]: I1212 05:09:49.807085 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:50 crc kubenswrapper[4796]: I1212 05:09:50.464247 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:09:50 crc kubenswrapper[4796]: I1212 05:09:50.476958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerStarted","Data":"2b3573e906ac6680c7d5ad43026c378baf3222c589f40210de0ee090c600fe7b"} Dec 12 05:09:51 crc kubenswrapper[4796]: I1212 05:09:51.491159 4796 generic.go:334] "Generic (PLEG): container finished" podID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerID="f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d" exitCode=0 Dec 12 05:09:51 crc kubenswrapper[4796]: I1212 05:09:51.491351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerDied","Data":"f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d"} Dec 12 05:09:53 crc kubenswrapper[4796]: I1212 05:09:53.510111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerStarted","Data":"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c"} Dec 12 05:09:54 crc kubenswrapper[4796]: I1212 05:09:54.521466 4796 generic.go:334] "Generic (PLEG): container finished" podID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerID="3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c" exitCode=0 Dec 12 05:09:54 crc kubenswrapper[4796]: I1212 05:09:54.521572 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerDied","Data":"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c"} Dec 12 05:09:55 crc kubenswrapper[4796]: I1212 05:09:55.552800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerStarted","Data":"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843"} Dec 12 05:09:55 crc kubenswrapper[4796]: I1212 05:09:55.587771 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cxp8x" podStartSLOduration=3.02934151 podStartE2EDuration="6.587746034s" podCreationTimestamp="2025-12-12 05:09:49 +0000 UTC" firstStartedPulling="2025-12-12 05:09:51.493482035 +0000 UTC m=+2182.369499182" lastFinishedPulling="2025-12-12 05:09:55.051886559 +0000 UTC m=+2185.927903706" observedRunningTime="2025-12-12 05:09:55.577907247 +0000 UTC m=+2186.453924404" watchObservedRunningTime="2025-12-12 05:09:55.587746034 +0000 UTC m=+2186.463763191" Dec 12 05:09:59 crc kubenswrapper[4796]: I1212 05:09:59.808239 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:59 crc kubenswrapper[4796]: I1212 05:09:59.808732 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:09:59 crc kubenswrapper[4796]: I1212 05:09:59.863794 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:10:00 crc kubenswrapper[4796]: I1212 05:10:00.641932 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:10:00 crc kubenswrapper[4796]: I1212 05:10:00.691215 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.613728 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cxp8x" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="registry-server" containerID="cri-o://f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843" gracePeriod=2 Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.969429 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.969819 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.969893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.970709 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:10:02 crc kubenswrapper[4796]: I1212 05:10:02.970837 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d" gracePeriod=600 Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.242642 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.367716 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities\") pod \"a55fff74-7c41-4c44-adfd-884c99e2bf92\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.367752 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfwgw\" (UniqueName: \"kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw\") pod \"a55fff74-7c41-4c44-adfd-884c99e2bf92\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.367839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content\") pod \"a55fff74-7c41-4c44-adfd-884c99e2bf92\" (UID: \"a55fff74-7c41-4c44-adfd-884c99e2bf92\") " Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.368735 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities" (OuterVolumeSpecName: "utilities") pod "a55fff74-7c41-4c44-adfd-884c99e2bf92" (UID: "a55fff74-7c41-4c44-adfd-884c99e2bf92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.374246 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw" (OuterVolumeSpecName: "kube-api-access-mfwgw") pod "a55fff74-7c41-4c44-adfd-884c99e2bf92" (UID: "a55fff74-7c41-4c44-adfd-884c99e2bf92"). InnerVolumeSpecName "kube-api-access-mfwgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.425968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a55fff74-7c41-4c44-adfd-884c99e2bf92" (UID: "a55fff74-7c41-4c44-adfd-884c99e2bf92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.469912 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.469943 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfwgw\" (UniqueName: \"kubernetes.io/projected/a55fff74-7c41-4c44-adfd-884c99e2bf92-kube-api-access-mfwgw\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.469953 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55fff74-7c41-4c44-adfd-884c99e2bf92-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.623834 4796 generic.go:334] "Generic (PLEG): container finished" podID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerID="f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843" exitCode=0 Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.623904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerDied","Data":"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843"} Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.623934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxp8x" event={"ID":"a55fff74-7c41-4c44-adfd-884c99e2bf92","Type":"ContainerDied","Data":"2b3573e906ac6680c7d5ad43026c378baf3222c589f40210de0ee090c600fe7b"} Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.623957 4796 scope.go:117] "RemoveContainer" containerID="f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.624091 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxp8x" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.638406 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d" exitCode=0 Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.638457 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d"} Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.638495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9"} Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.676881 4796 scope.go:117] "RemoveContainer" containerID="3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.685713 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.701332 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cxp8x"] Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.743025 4796 scope.go:117] "RemoveContainer" containerID="f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.811488 4796 scope.go:117] "RemoveContainer" containerID="f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843" Dec 12 05:10:03 crc kubenswrapper[4796]: E1212 05:10:03.812529 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843\": container with ID starting with f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843 not found: ID does not exist" containerID="f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.812597 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843"} err="failed to get container status \"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843\": rpc error: code = NotFound desc = could not find container \"f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843\": container with ID starting with f7293a1e4bfa3fdc8a79f70e780a1645b942aa0b9298ffdaf4e9dbd8d6629843 not found: ID does not exist" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.812624 4796 scope.go:117] "RemoveContainer" containerID="3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c" Dec 12 05:10:03 crc kubenswrapper[4796]: E1212 05:10:03.812985 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c\": container with ID starting with 3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c not found: ID does not exist" 
containerID="3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.813015 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c"} err="failed to get container status \"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c\": rpc error: code = NotFound desc = could not find container \"3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c\": container with ID starting with 3be8399b911a470fee05e01d2bf5e188cd95d400955eb26f35b855fa0a9d649c not found: ID does not exist" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.813035 4796 scope.go:117] "RemoveContainer" containerID="f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d" Dec 12 05:10:03 crc kubenswrapper[4796]: E1212 05:10:03.813977 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d\": container with ID starting with f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d not found: ID does not exist" containerID="f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.814006 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d"} err="failed to get container status \"f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d\": rpc error: code = NotFound desc = could not find container \"f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d\": container with ID starting with f3c2e0c54775bb003e2397943d184c29ab00c783ce6cf958eff678268abf9b5d not found: ID does not exist" Dec 12 05:10:03 crc kubenswrapper[4796]: I1212 05:10:03.814026 4796 scope.go:117] "RemoveContainer" containerID="d430b95ea8dae07373ba9792545fcfd54815a7eab36b3eebde02bcfd94fbf5a5" Dec 12 05:10:05 crc kubenswrapper[4796]: I1212 05:10:05.424011 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" path="/var/lib/kubelet/pods/a55fff74-7c41-4c44-adfd-884c99e2bf92/volumes" Dec 12 05:10:11 crc kubenswrapper[4796]: I1212 05:10:11.711364 4796 generic.go:334] "Generic (PLEG): container finished" podID="60d6d74d-f5f7-43c4-8462-f073926de480" containerID="f1fbba5756f39fb5b0735970544eaeeffa3b5c3a99424a190da673e7c6e7d585" exitCode=0 Dec 12 05:10:11 crc kubenswrapper[4796]: I1212 05:10:11.711496 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" event={"ID":"60d6d74d-f5f7-43c4-8462-f073926de480","Type":"ContainerDied","Data":"f1fbba5756f39fb5b0735970544eaeeffa3b5c3a99424a190da673e7c6e7d585"} Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.154819 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.424532 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0\") pod \"60d6d74d-f5f7-43c4-8462-f073926de480\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.424592 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key\") pod \"60d6d74d-f5f7-43c4-8462-f073926de480\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.424613 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle\") pod \"60d6d74d-f5f7-43c4-8462-f073926de480\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.424749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory\") pod \"60d6d74d-f5f7-43c4-8462-f073926de480\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.424791 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkhjh\" (UniqueName: \"kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh\") pod \"60d6d74d-f5f7-43c4-8462-f073926de480\" (UID: \"60d6d74d-f5f7-43c4-8462-f073926de480\") " Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.435442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "60d6d74d-f5f7-43c4-8462-f073926de480" (UID: "60d6d74d-f5f7-43c4-8462-f073926de480"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.462786 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh" (OuterVolumeSpecName: "kube-api-access-mkhjh") pod "60d6d74d-f5f7-43c4-8462-f073926de480" (UID: "60d6d74d-f5f7-43c4-8462-f073926de480"). InnerVolumeSpecName "kube-api-access-mkhjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.470532 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60d6d74d-f5f7-43c4-8462-f073926de480" (UID: "60d6d74d-f5f7-43c4-8462-f073926de480"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.488924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory" (OuterVolumeSpecName: "inventory") pod "60d6d74d-f5f7-43c4-8462-f073926de480" (UID: "60d6d74d-f5f7-43c4-8462-f073926de480"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.497523 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "60d6d74d-f5f7-43c4-8462-f073926de480" (UID: "60d6d74d-f5f7-43c4-8462-f073926de480"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.526207 4796 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/60d6d74d-f5f7-43c4-8462-f073926de480-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.526249 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.526268 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.526305 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d6d74d-f5f7-43c4-8462-f073926de480-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.526349 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkhjh\" (UniqueName: \"kubernetes.io/projected/60d6d74d-f5f7-43c4-8462-f073926de480-kube-api-access-mkhjh\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.730509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" event={"ID":"60d6d74d-f5f7-43c4-8462-f073926de480","Type":"ContainerDied","Data":"a5579714952a3ef783c7eb325b8fa7f20c17936bc68ecd92742c39332e82d67b"} Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.730866 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5579714952a3ef783c7eb325b8fa7f20c17936bc68ecd92742c39332e82d67b" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.730679 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tkc6x" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.833604 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s"] Dec 12 05:10:13 crc kubenswrapper[4796]: E1212 05:10:13.834113 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="extract-content" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834139 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="extract-content" Dec 12 05:10:13 crc kubenswrapper[4796]: E1212 05:10:13.834164 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="registry-server" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834175 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="registry-server" Dec 12 05:10:13 crc kubenswrapper[4796]: E1212 05:10:13.834198 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d6d74d-f5f7-43c4-8462-f073926de480" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834207 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d6d74d-f5f7-43c4-8462-f073926de480" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 05:10:13 crc kubenswrapper[4796]: E1212 05:10:13.834217 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="extract-utilities" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834228 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="extract-utilities" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834473 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d6d74d-f5f7-43c4-8462-f073926de480" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.834525 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55fff74-7c41-4c44-adfd-884c99e2bf92" containerName="registry-server" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.835329 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.838413 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.838426 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.838636 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.841294 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.841352 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.844002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.850862 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s"] Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.933954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.934023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.934225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.934557 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.934648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:13 crc kubenswrapper[4796]: I1212 05:10:13.934682 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqt6\" (UniqueName: \"kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035463 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035599 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqt6\" (UniqueName: \"kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.035762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 
05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.040243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.040925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.041202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.041932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.056563 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.058035 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqt6\" (UniqueName: \"kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.152623 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.699620 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s"] Dec 12 05:10:14 crc kubenswrapper[4796]: I1212 05:10:14.738957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" event={"ID":"177571dd-6d0b-463d-8831-2983eb8a331d","Type":"ContainerStarted","Data":"4ea1d32f70a303404fc385a59eca9f3ba79769d5e28e0b8319f84fc292205276"} Dec 12 05:10:15 crc kubenswrapper[4796]: I1212 05:10:15.751316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" event={"ID":"177571dd-6d0b-463d-8831-2983eb8a331d","Type":"ContainerStarted","Data":"0fe82342fe2ec21349faa5eba6b1040777ee3a662ddec4bf4dd72803778f84f8"} Dec 12 05:10:15 crc kubenswrapper[4796]: I1212 05:10:15.770557 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" podStartSLOduration=2.593774415 podStartE2EDuration="2.770540745s" podCreationTimestamp="2025-12-12 05:10:13 +0000 UTC" firstStartedPulling="2025-12-12 05:10:14.706198845 +0000 UTC m=+2205.582215992" lastFinishedPulling="2025-12-12 05:10:14.882965175 +0000 UTC m=+2205.758982322" observedRunningTime="2025-12-12 05:10:15.768030007 +0000 UTC m=+2206.644047154" watchObservedRunningTime="2025-12-12 05:10:15.770540745 +0000 UTC m=+2206.646557892" Dec 12 05:10:21 crc kubenswrapper[4796]: I1212 05:10:21.953569 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:21 crc kubenswrapper[4796]: I1212 05:10:21.956043 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:21 crc kubenswrapper[4796]: I1212 05:10:21.976521 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.003528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctvj\" (UniqueName: \"kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.003572 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.003607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.105004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctvj\" (UniqueName: \"kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.105045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.105071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.105689 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.105687 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.131357 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8ctvj\" (UniqueName: \"kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj\") pod \"certified-operators-kdblz\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.291622 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:22 crc kubenswrapper[4796]: I1212 05:10:22.848735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:23 crc kubenswrapper[4796]: I1212 05:10:23.829196 4796 generic.go:334] "Generic (PLEG): container finished" podID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerID="b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc" exitCode=0 Dec 12 05:10:23 crc kubenswrapper[4796]: I1212 05:10:23.829251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerDied","Data":"b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc"} Dec 12 05:10:23 crc kubenswrapper[4796]: I1212 05:10:23.830016 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerStarted","Data":"3bea693e71a3007f3ab97fbd0b57e5c4c93f72b91b5acb349ea361cafac37c88"} Dec 12 05:10:25 crc kubenswrapper[4796]: I1212 05:10:25.849378 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerStarted","Data":"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64"} Dec 12 05:10:26 crc kubenswrapper[4796]: I1212 05:10:26.862692 4796 generic.go:334] "Generic (PLEG): container finished" podID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerID="35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64" exitCode=0 Dec 12 05:10:26 crc kubenswrapper[4796]: I1212 05:10:26.863050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerDied","Data":"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64"} Dec 12 05:10:27 crc kubenswrapper[4796]: I1212 05:10:27.876125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerStarted","Data":"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453"} Dec 12 05:10:27 crc kubenswrapper[4796]: I1212 05:10:27.901539 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdblz" podStartSLOduration=3.300195199 podStartE2EDuration="6.901516336s" podCreationTimestamp="2025-12-12 05:10:21 +0000 UTC" firstStartedPulling="2025-12-12 05:10:23.832384192 +0000 UTC m=+2214.708401349" lastFinishedPulling="2025-12-12 05:10:27.433705309 +0000 UTC m=+2218.309722486" observedRunningTime="2025-12-12 05:10:27.89909325 +0000 UTC m=+2218.775110397" watchObservedRunningTime="2025-12-12 05:10:27.901516336 +0000 UTC m=+2218.777533483" Dec 12 05:10:32 crc kubenswrapper[4796]: I1212 05:10:32.292195 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:32 crc kubenswrapper[4796]: I1212 05:10:32.292696 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:32 crc kubenswrapper[4796]: I1212 05:10:32.340156 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:32 crc kubenswrapper[4796]: I1212 05:10:32.970424 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:33 crc kubenswrapper[4796]: I1212 05:10:33.020924 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:34 crc kubenswrapper[4796]: I1212 05:10:34.935535 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdblz" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="registry-server" containerID="cri-o://bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453" gracePeriod=2 Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.430889 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.466816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ctvj\" (UniqueName: \"kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj\") pod \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.467117 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content\") pod \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.467197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities\") pod \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\" (UID: \"5d61959a-5ac5-4933-80ec-199f7ef98c1f\") " Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.468968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities" (OuterVolumeSpecName: "utilities") pod "5d61959a-5ac5-4933-80ec-199f7ef98c1f" (UID: "5d61959a-5ac5-4933-80ec-199f7ef98c1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.483027 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj" (OuterVolumeSpecName: "kube-api-access-8ctvj") pod "5d61959a-5ac5-4933-80ec-199f7ef98c1f" (UID: "5d61959a-5ac5-4933-80ec-199f7ef98c1f"). InnerVolumeSpecName "kube-api-access-8ctvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.536124 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d61959a-5ac5-4933-80ec-199f7ef98c1f" (UID: "5d61959a-5ac5-4933-80ec-199f7ef98c1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.571276 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.571343 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d61959a-5ac5-4933-80ec-199f7ef98c1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.571363 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ctvj\" (UniqueName: \"kubernetes.io/projected/5d61959a-5ac5-4933-80ec-199f7ef98c1f-kube-api-access-8ctvj\") on node \"crc\" DevicePath \"\"" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.948967 4796 generic.go:334] "Generic (PLEG): container finished" podID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerID="bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453" exitCode=0 Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.949011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerDied","Data":"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453"} Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.949038 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdblz" event={"ID":"5d61959a-5ac5-4933-80ec-199f7ef98c1f","Type":"ContainerDied","Data":"3bea693e71a3007f3ab97fbd0b57e5c4c93f72b91b5acb349ea361cafac37c88"} Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.949055 4796 scope.go:117] "RemoveContainer" containerID="bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.949175 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdblz" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.970591 4796 scope.go:117] "RemoveContainer" containerID="35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64" Dec 12 05:10:35 crc kubenswrapper[4796]: I1212 05:10:35.991093 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.006134 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdblz"] Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.016421 4796 scope.go:117] "RemoveContainer" containerID="b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.045892 4796 scope.go:117] "RemoveContainer" containerID="bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453" Dec 12 05:10:36 crc kubenswrapper[4796]: E1212 05:10:36.046699 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453\": container with ID starting with bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453 not found: ID does not exist" containerID="bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.046729 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453"} err="failed to get container status \"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453\": rpc error: code = NotFound desc = could not find container \"bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453\": container with ID starting with bac183f724514d40c444c18eb53b448bd96da36040e97bdcb4d5b06a89a7d453 not found: ID does not exist" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.046750 4796 scope.go:117] "RemoveContainer" containerID="35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64" Dec 12 05:10:36 crc kubenswrapper[4796]: E1212 05:10:36.047027 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64\": container with ID starting with 35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64 not found: ID does not exist" containerID="35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.047071 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64"} err="failed to get container status \"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64\": rpc error: code = NotFound desc = could not find container \"35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64\": container with ID starting with 35890bd0723c48b8428afa8b292eb8716f7191b342256d7645a2664a57091f64 not found: ID does not exist" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.047096 4796 scope.go:117] "RemoveContainer" containerID="b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc" Dec 12 05:10:36 crc kubenswrapper[4796]: E1212 05:10:36.047355 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc\": container with ID starting with b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc not found: ID does not exist" containerID="b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc" Dec 12 05:10:36 crc kubenswrapper[4796]: I1212 05:10:36.047416 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc"} err="failed to get container status \"b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc\": rpc error: code = NotFound desc = could not find container \"b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc\": container with ID starting with b90be774226126690dad0094f9175a1dad10176e43c8ce4fcd611ddb8a52cdfc not found: ID does not exist" Dec 12 05:10:37 crc kubenswrapper[4796]: I1212 05:10:37.427396 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" path="/var/lib/kubelet/pods/5d61959a-5ac5-4933-80ec-199f7ef98c1f/volumes" Dec 12 05:11:08 crc kubenswrapper[4796]: I1212 05:11:08.268300 4796 generic.go:334] "Generic (PLEG): container finished" podID="177571dd-6d0b-463d-8831-2983eb8a331d" containerID="0fe82342fe2ec21349faa5eba6b1040777ee3a662ddec4bf4dd72803778f84f8" exitCode=0 Dec 12 05:11:08 crc kubenswrapper[4796]: I1212 05:11:08.268521 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" event={"ID":"177571dd-6d0b-463d-8831-2983eb8a331d","Type":"ContainerDied","Data":"0fe82342fe2ec21349faa5eba6b1040777ee3a662ddec4bf4dd72803778f84f8"} Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.745392 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829410 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829513 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829610 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829629 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829680 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwqt6\" (UniqueName: \"kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.829706 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key\") pod \"177571dd-6d0b-463d-8831-2983eb8a331d\" (UID: \"177571dd-6d0b-463d-8831-2983eb8a331d\") " Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.836379 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6" (OuterVolumeSpecName: "kube-api-access-pwqt6") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "kube-api-access-pwqt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.849765 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.862969 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.863774 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.870464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.871568 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory" (OuterVolumeSpecName: "inventory") pod "177571dd-6d0b-463d-8831-2983eb8a331d" (UID: "177571dd-6d0b-463d-8831-2983eb8a331d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931618 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931648 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931659 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwqt6\" (UniqueName: \"kubernetes.io/projected/177571dd-6d0b-463d-8831-2983eb8a331d-kube-api-access-pwqt6\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931669 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931677 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:09 crc kubenswrapper[4796]: I1212 05:11:09.931712 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/177571dd-6d0b-463d-8831-2983eb8a331d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.288902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" event={"ID":"177571dd-6d0b-463d-8831-2983eb8a331d","Type":"ContainerDied","Data":"4ea1d32f70a303404fc385a59eca9f3ba79769d5e28e0b8319f84fc292205276"} Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.288942 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea1d32f70a303404fc385a59eca9f3ba79769d5e28e0b8319f84fc292205276" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.288944 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.391464 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4"] Dec 12 05:11:10 crc kubenswrapper[4796]: E1212 05:11:10.391927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="registry-server" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.391942 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="registry-server" Dec 12 05:11:10 crc kubenswrapper[4796]: E1212 05:11:10.391957 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="extract-content" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.391964 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="extract-content" Dec 12 05:11:10 crc kubenswrapper[4796]: E1212 05:11:10.391974 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="extract-utilities" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.391981 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="extract-utilities" Dec 12 05:11:10 crc kubenswrapper[4796]: E1212 05:11:10.391991 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177571dd-6d0b-463d-8831-2983eb8a331d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.392000 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="177571dd-6d0b-463d-8831-2983eb8a331d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.392263 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d61959a-5ac5-4933-80ec-199f7ef98c1f" containerName="registry-server" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.392303 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="177571dd-6d0b-463d-8831-2983eb8a331d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.393037 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.396828 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.397259 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.397457 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.397692 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.397843 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.427023 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4"] Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.441116 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.441162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.441206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrkh\" (UniqueName: \"kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.441565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.441613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.542735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.542788 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.542843 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.542862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.542883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrkh\" (UniqueName: \"kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.548298 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.548526 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.548576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.557017 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrkh\" (UniqueName: \"kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.557601 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:10 crc kubenswrapper[4796]: I1212 05:11:10.725767 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:11:11 crc kubenswrapper[4796]: I1212 05:11:11.285938 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4"] Dec 12 05:11:12 crc kubenswrapper[4796]: I1212 05:11:12.317530 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" event={"ID":"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a","Type":"ContainerStarted","Data":"8650293b7c8804e783ab59a227f39bcaeb9dcb7e48ff50ee4d30b7b9c9026f65"} Dec 12 05:11:12 crc kubenswrapper[4796]: I1212 05:11:12.318064 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" event={"ID":"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a","Type":"ContainerStarted","Data":"5bc221f972f236c5bcf8e46f46074a514a5b54dce84eb59d8a7418a9828532d7"} Dec 12 05:11:12 crc kubenswrapper[4796]: I1212 05:11:12.341243 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" podStartSLOduration=1.958781404 podStartE2EDuration="2.341227199s" podCreationTimestamp="2025-12-12 05:11:10 +0000 UTC" firstStartedPulling="2025-12-12 05:11:11.309980375 +0000 UTC m=+2262.185997522" lastFinishedPulling="2025-12-12 05:11:11.69242616 +0000 UTC m=+2262.568443317" observedRunningTime="2025-12-12 05:11:12.334025595 +0000 UTC m=+2263.210042732" watchObservedRunningTime="2025-12-12 05:11:12.341227199 +0000 UTC m=+2263.217244346" Dec 12 05:12:32 crc kubenswrapper[4796]: I1212 05:12:32.969450 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:12:32 crc kubenswrapper[4796]: I1212 05:12:32.970034 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:13:02 crc kubenswrapper[4796]: I1212 05:13:02.969885 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:13:02 crc kubenswrapper[4796]: I1212 05:13:02.970519 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:13:32 crc kubenswrapper[4796]: I1212 05:13:32.970070 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:13:32 crc kubenswrapper[4796]: I1212 05:13:32.972042 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:13:32 crc kubenswrapper[4796]: I1212 05:13:32.972227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:13:32 crc kubenswrapper[4796]: I1212 05:13:32.973311 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:13:32 crc kubenswrapper[4796]: I1212 05:13:32.973544 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" gracePeriod=600 Dec 12 05:13:33 crc kubenswrapper[4796]: E1212 05:13:33.106426 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:13:33 crc kubenswrapper[4796]: I1212 05:13:33.653442 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" exitCode=0 Dec 12 05:13:33 crc kubenswrapper[4796]: I1212 05:13:33.653463 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9"} Dec 12 05:13:33 crc kubenswrapper[4796]: I1212 05:13:33.653795 4796 scope.go:117] "RemoveContainer" containerID="f11ff95b77ce8e0e105735357ea3bd8d295da60b88a478bd5954d7b8f179d18d" Dec 12 05:13:33 crc kubenswrapper[4796]: I1212 05:13:33.654424 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:13:33 crc kubenswrapper[4796]: E1212 
05:13:33.654658 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:13:47 crc kubenswrapper[4796]: I1212 05:13:47.411184 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:13:47 crc kubenswrapper[4796]: E1212 05:13:47.411963 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:13:58 crc kubenswrapper[4796]: I1212 05:13:58.411440 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:13:58 crc kubenswrapper[4796]: E1212 05:13:58.412258 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:14:11 crc kubenswrapper[4796]: I1212 05:14:11.412680 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:14:11 crc kubenswrapper[4796]: E1212 05:14:11.414724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:14:22 crc kubenswrapper[4796]: I1212 05:14:22.411000 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:14:22 crc kubenswrapper[4796]: E1212 05:14:22.411720 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:14:35 crc kubenswrapper[4796]: I1212 05:14:35.411438 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:14:35 crc kubenswrapper[4796]: E1212 05:14:35.412379 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:14:49 crc kubenswrapper[4796]: I1212 05:14:49.416938 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:14:49 crc kubenswrapper[4796]: E1212 05:14:49.417893 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.150694 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd"] Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.152902 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.156055 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.173426 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.174644 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd"] Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.267967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnlcv\" (UniqueName: \"kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.268384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.268536 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.370879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnlcv\" (UniqueName: \"kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv\") pod 
\"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.370997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.371020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.371979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.380950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.389212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnlcv\" (UniqueName: \"kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv\") pod \"collect-profiles-29425275-gtrtd\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.477168 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:00 crc kubenswrapper[4796]: I1212 05:15:00.914534 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd"] Dec 12 05:15:01 crc kubenswrapper[4796]: I1212 05:15:01.500103 4796 generic.go:334] "Generic (PLEG): container finished" podID="747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" containerID="87c1bb5837742563e17809c79e9f3ab1db9b40a97239867f95257fbe00b817cd" exitCode=0 Dec 12 05:15:01 crc kubenswrapper[4796]: I1212 05:15:01.500528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" event={"ID":"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2","Type":"ContainerDied","Data":"87c1bb5837742563e17809c79e9f3ab1db9b40a97239867f95257fbe00b817cd"} Dec 12 05:15:01 crc kubenswrapper[4796]: I1212 05:15:01.500569 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" event={"ID":"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2","Type":"ContainerStarted","Data":"8bcc554b270c551b2434b9f3326fde82bfd6d862e32d51602e45e8c28b9964bd"} Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.411797 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:15:02 crc kubenswrapper[4796]: E1212 05:15:02.412613 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.826014 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.924410 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume\") pod \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.924931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume\") pod \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.925098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnlcv\" (UniqueName: \"kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv\") pod \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\" (UID: \"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2\") " Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.925446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume" (OuterVolumeSpecName: "config-volume") pod "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" (UID: "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.926348 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.930138 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv" (OuterVolumeSpecName: "kube-api-access-dnlcv") pod "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" (UID: "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2"). InnerVolumeSpecName "kube-api-access-dnlcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:15:02 crc kubenswrapper[4796]: I1212 05:15:02.930472 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" (UID: "747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.027931 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.027962 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnlcv\" (UniqueName: \"kubernetes.io/projected/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2-kube-api-access-dnlcv\") on node \"crc\" DevicePath \"\"" Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.518031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" event={"ID":"747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2","Type":"ContainerDied","Data":"8bcc554b270c551b2434b9f3326fde82bfd6d862e32d51602e45e8c28b9964bd"} Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.518581 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bcc554b270c551b2434b9f3326fde82bfd6d862e32d51602e45e8c28b9964bd" Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.518111 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd" Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.904386 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg"] Dec 12 05:15:03 crc kubenswrapper[4796]: I1212 05:15:03.918956 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425230-zm2tg"] Dec 12 05:15:05 crc kubenswrapper[4796]: I1212 05:15:05.424522 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adaae06-a6aa-4040-9a14-5490cd58b1d9" path="/var/lib/kubelet/pods/6adaae06-a6aa-4040-9a14-5490cd58b1d9/volumes" Dec 12 05:15:15 crc kubenswrapper[4796]: I1212 05:15:15.411998 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:15:15 crc kubenswrapper[4796]: E1212 05:15:15.412730 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:15:28 crc kubenswrapper[4796]: I1212 05:15:28.411524 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:15:28 crc kubenswrapper[4796]: E1212 05:15:28.412369 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:15:39 crc kubenswrapper[4796]: I1212 05:15:39.418167 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:15:39 crc kubenswrapper[4796]: E1212 05:15:39.419241 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:15:50 crc kubenswrapper[4796]: I1212 05:15:50.411348 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:15:50 crc kubenswrapper[4796]: E1212 05:15:50.412037 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:16:02 crc kubenswrapper[4796]: I1212 05:16:02.992514 4796 scope.go:117] "RemoveContainer" 
containerID="47d717eca729cfee6d5471aee80bab56099571d782bc23a56b5add2eea830d5d" Dec 12 05:16:03 crc kubenswrapper[4796]: I1212 05:16:03.411315 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:16:03 crc kubenswrapper[4796]: E1212 05:16:03.411810 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:16:08 crc kubenswrapper[4796]: I1212 05:16:08.119729 4796 generic.go:334] "Generic (PLEG): container finished" podID="61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" containerID="8650293b7c8804e783ab59a227f39bcaeb9dcb7e48ff50ee4d30b7b9c9026f65" exitCode=0 Dec 12 05:16:08 crc kubenswrapper[4796]: I1212 05:16:08.119807 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" event={"ID":"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a","Type":"ContainerDied","Data":"8650293b7c8804e783ab59a227f39bcaeb9dcb7e48ff50ee4d30b7b9c9026f65"} Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.502498 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.560533 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory\") pod \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.560641 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0\") pod \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.560684 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrkh\" (UniqueName: \"kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh\") pod \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.560726 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle\") pod \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.560866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key\") pod \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\" (UID: \"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a\") " Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.566858 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh" 
(OuterVolumeSpecName: "kube-api-access-fkrkh") pod "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" (UID: "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a"). InnerVolumeSpecName "kube-api-access-fkrkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.567192 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" (UID: "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.590506 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" (UID: "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.593053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" (UID: "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.597462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory" (OuterVolumeSpecName: "inventory") pod "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" (UID: "61d49bcc-8f04-4fc8-8f61-70e5cc450c5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.663101 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.663134 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.663145 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrkh\" (UniqueName: \"kubernetes.io/projected/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-kube-api-access-fkrkh\") on node \"crc\" DevicePath \"\"" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.663156 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:16:09 crc kubenswrapper[4796]: I1212 05:16:09.663164 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61d49bcc-8f04-4fc8-8f61-70e5cc450c5a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.138504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" event={"ID":"61d49bcc-8f04-4fc8-8f61-70e5cc450c5a","Type":"ContainerDied","Data":"5bc221f972f236c5bcf8e46f46074a514a5b54dce84eb59d8a7418a9828532d7"} Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.139029 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc221f972f236c5bcf8e46f46074a514a5b54dce84eb59d8a7418a9828532d7" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.138558 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.236429 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279"] Dec 12 05:16:10 crc kubenswrapper[4796]: E1212 05:16:10.236927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" containerName="collect-profiles" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.236948 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" containerName="collect-profiles" Dec 12 05:16:10 crc kubenswrapper[4796]: E1212 05:16:10.236993 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.237002 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.237224 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" containerName="collect-profiles" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.237281 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d49bcc-8f04-4fc8-8f61-70e5cc450c5a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.238165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.243474 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.244792 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.244996 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.245157 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.245526 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.245708 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.245852 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.251681 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279"] Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283059 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: 
\"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm44\" (UniqueName: \"kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.283912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.284031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.284135 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.284395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386429 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thm44\" (UniqueName: \"kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386588 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386688 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.386842 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.389148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.391011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.392904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.394079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.395714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.395837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.396272 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.396523 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.416141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thm44\" (UniqueName: \"kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rl279\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:10 crc kubenswrapper[4796]: I1212 05:16:10.557162 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:16:11 crc kubenswrapper[4796]: I1212 05:16:11.139984 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279"] Dec 12 05:16:11 crc kubenswrapper[4796]: I1212 05:16:11.146050 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:16:11 crc kubenswrapper[4796]: I1212 05:16:11.161575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" event={"ID":"a4dc653f-0e4f-4c95-a71a-c96d4419f484","Type":"ContainerStarted","Data":"3f36eba7dbf98775b68e4299a49b21d5447ba86ba373c3b8f609d284e9230623"} Dec 12 05:16:12 crc kubenswrapper[4796]: I1212 05:16:12.171333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" event={"ID":"a4dc653f-0e4f-4c95-a71a-c96d4419f484","Type":"ContainerStarted","Data":"f2133a1e7e085508428ec60821fd468c26abad0d329118755295673ffa2c64e3"} Dec 12 05:16:16 crc kubenswrapper[4796]: I1212 05:16:16.411276 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:16:16 crc kubenswrapper[4796]: E1212 05:16:16.412113 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:16:27 crc kubenswrapper[4796]: I1212 05:16:27.411673 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:16:27 crc kubenswrapper[4796]: E1212 05:16:27.412339 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:16:42 crc kubenswrapper[4796]: I1212 05:16:42.411865 4796 
scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:16:42 crc kubenswrapper[4796]: E1212 05:16:42.412984 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:16:56 crc kubenswrapper[4796]: I1212 05:16:56.411424 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:16:56 crc kubenswrapper[4796]: E1212 05:16:56.412431 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:17:09 crc kubenswrapper[4796]: I1212 05:17:09.419960 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:17:09 crc kubenswrapper[4796]: E1212 05:17:09.421002 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:17:21 crc kubenswrapper[4796]: I1212 05:17:21.412654 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:17:21 crc kubenswrapper[4796]: E1212 05:17:21.413456 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:17:34 crc kubenswrapper[4796]: I1212 05:17:34.411514 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:17:34 crc kubenswrapper[4796]: E1212 05:17:34.413640 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:17:46 crc kubenswrapper[4796]: I1212 05:17:46.412136 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:17:46 crc kubenswrapper[4796]: E1212 05:17:46.413102 4796 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:17:58 crc kubenswrapper[4796]: I1212 05:17:58.419694 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:17:58 crc kubenswrapper[4796]: E1212 05:17:58.420598 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:18:11 crc kubenswrapper[4796]: I1212 05:18:11.411953 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:18:11 crc kubenswrapper[4796]: E1212 05:18:11.412648 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:18:25 crc kubenswrapper[4796]: I1212 05:18:25.411457 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:18:25 crc kubenswrapper[4796]: E1212 05:18:25.412391 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:18:40 crc kubenswrapper[4796]: I1212 05:18:40.411038 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:18:41 crc kubenswrapper[4796]: I1212 05:18:41.502846 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48"} Dec 12 05:18:41 crc kubenswrapper[4796]: I1212 05:18:41.530796 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" podStartSLOduration=151.315656819 podStartE2EDuration="2m31.530779235s" podCreationTimestamp="2025-12-12 05:16:10 +0000 UTC" firstStartedPulling="2025-12-12 05:16:11.14572799 +0000 UTC m=+2562.021745137" lastFinishedPulling="2025-12-12 05:16:11.360850406 +0000 UTC m=+2562.236867553" observedRunningTime="2025-12-12 05:16:12.191816899 +0000 UTC m=+2563.067834046" 
watchObservedRunningTime="2025-12-12 05:18:41.530779235 +0000 UTC m=+2712.406796382" Dec 12 05:18:43 crc kubenswrapper[4796]: I1212 05:18:43.583966 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 05:19:01 crc kubenswrapper[4796]: I1212 05:19:01.673779 4796 generic.go:334] "Generic (PLEG): container finished" podID="a4dc653f-0e4f-4c95-a71a-c96d4419f484" containerID="f2133a1e7e085508428ec60821fd468c26abad0d329118755295673ffa2c64e3" exitCode=0 Dec 12 05:19:01 crc kubenswrapper[4796]: I1212 05:19:01.673840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" event={"ID":"a4dc653f-0e4f-4c95-a71a-c96d4419f484","Type":"ContainerDied","Data":"f2133a1e7e085508428ec60821fd468c26abad0d329118755295673ffa2c64e3"} Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.155724 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354255 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thm44\" (UniqueName: \"kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354606 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354760 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0\") pod 
\"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354796 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.354839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory\") pod \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\" (UID: \"a4dc653f-0e4f-4c95-a71a-c96d4419f484\") " Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.363316 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44" (OuterVolumeSpecName: "kube-api-access-thm44") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "kube-api-access-thm44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.378331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.388701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory" (OuterVolumeSpecName: "inventory") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.391541 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.392751 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.399500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.402322 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.409572 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.416241 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a4dc653f-0e4f-4c95-a71a-c96d4419f484" (UID: "a4dc653f-0e4f-4c95-a71a-c96d4419f484"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459618 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459648 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459659 4796 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459672 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459683 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459692 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thm44\" (UniqueName: \"kubernetes.io/projected/a4dc653f-0e4f-4c95-a71a-c96d4419f484-kube-api-access-thm44\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459700 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459711 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.459722 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a4dc653f-0e4f-4c95-a71a-c96d4419f484-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.699239 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" event={"ID":"a4dc653f-0e4f-4c95-a71a-c96d4419f484","Type":"ContainerDied","Data":"3f36eba7dbf98775b68e4299a49b21d5447ba86ba373c3b8f609d284e9230623"} Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.699305 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f36eba7dbf98775b68e4299a49b21d5447ba86ba373c3b8f609d284e9230623" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.699381 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rl279" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.841466 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6"] Dec 12 05:19:03 crc kubenswrapper[4796]: E1212 05:19:03.841994 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dc653f-0e4f-4c95-a71a-c96d4419f484" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.842015 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dc653f-0e4f-4c95-a71a-c96d4419f484" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.842272 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dc653f-0e4f-4c95-a71a-c96d4419f484" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.845114 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.848897 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.849475 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.849472 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.852081 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.854842 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rn8cp" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.856755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6"] Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrd54\" (UniqueName: \"kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972455 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972532 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972582 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 
05:19:03.972609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:03 crc kubenswrapper[4796]: I1212 05:19:03.972648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.074656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.074963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.075001 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.075069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.075097 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd54\" (UniqueName: \"kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.075128 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: 
\"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.075185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.080964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.082769 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.083424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.085707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.088319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.092068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.095483 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd54\" (UniqueName: \"kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-phst6\" (UID: 
\"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.165933 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:19:04 crc kubenswrapper[4796]: W1212 05:19:04.755366 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e68ee3_5e0b_4748_a03f_9c4d226b690c.slice/crio-ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222 WatchSource:0}: Error finding container ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222: Status 404 returned error can't find the container with id ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222 Dec 12 05:19:04 crc kubenswrapper[4796]: I1212 05:19:04.759949 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6"] Dec 12 05:19:05 crc kubenswrapper[4796]: I1212 05:19:05.721404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" event={"ID":"e3e68ee3-5e0b-4748-a03f-9c4d226b690c","Type":"ContainerStarted","Data":"3f59c53192feb06d29b86fda054a7ceaae2cd456b689a92e82351424d98e26a3"} Dec 12 05:19:05 crc kubenswrapper[4796]: I1212 05:19:05.722424 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" event={"ID":"e3e68ee3-5e0b-4748-a03f-9c4d226b690c","Type":"ContainerStarted","Data":"ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222"} Dec 12 05:19:05 crc kubenswrapper[4796]: I1212 05:19:05.748451 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" podStartSLOduration=2.492329808 podStartE2EDuration="2.748429973s" podCreationTimestamp="2025-12-12 05:19:03 +0000 UTC" firstStartedPulling="2025-12-12 05:19:04.758467218 +0000 UTC m=+2735.634484365" lastFinishedPulling="2025-12-12 05:19:05.014567373 +0000 UTC m=+2735.890584530" observedRunningTime="2025-12-12 05:19:05.735849788 +0000 UTC m=+2736.611866935" watchObservedRunningTime="2025-12-12 05:19:05.748429973 +0000 UTC m=+2736.624447120" Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.872059 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.874861 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.886575 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.995109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzfv\" (UniqueName: \"kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.995203 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:54 crc kubenswrapper[4796]: I1212 05:20:54.995291 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.097500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.097634 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.097731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzfv\" (UniqueName: \"kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.097930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.098024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.120642 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7bzfv\" (UniqueName: \"kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv\") pod \"community-operators-q8tjl\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.193187 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.856704 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:20:55 crc kubenswrapper[4796]: I1212 05:20:55.907697 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerStarted","Data":"475ea3d92fdfe73ef6c17e3afcc55602159891009b297a190f9217d5d156d084"} Dec 12 05:20:56 crc kubenswrapper[4796]: I1212 05:20:56.917507 4796 generic.go:334] "Generic (PLEG): container finished" podID="0b628baa-16d4-4479-86e0-13341beea112" containerID="14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae" exitCode=0 Dec 12 05:20:56 crc kubenswrapper[4796]: I1212 05:20:56.917551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerDied","Data":"14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae"} Dec 12 05:20:57 crc kubenswrapper[4796]: I1212 05:20:57.928150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerStarted","Data":"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188"} Dec 12 05:20:58 crc kubenswrapper[4796]: I1212 05:20:58.937836 4796 generic.go:334] "Generic (PLEG): container finished" podID="0b628baa-16d4-4479-86e0-13341beea112" containerID="c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188" exitCode=0 Dec 12 05:20:58 crc kubenswrapper[4796]: I1212 05:20:58.938012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerDied","Data":"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188"} Dec 12 05:20:59 crc kubenswrapper[4796]: I1212 05:20:59.950474 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerStarted","Data":"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e"} Dec 12 05:20:59 crc kubenswrapper[4796]: I1212 05:20:59.972486 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8tjl" podStartSLOduration=3.4914203759999998 podStartE2EDuration="5.972469278s" podCreationTimestamp="2025-12-12 05:20:54 +0000 UTC" firstStartedPulling="2025-12-12 05:20:56.919514006 +0000 UTC m=+2847.795531153" lastFinishedPulling="2025-12-12 05:20:59.400562908 +0000 UTC m=+2850.276580055" observedRunningTime="2025-12-12 05:20:59.965970053 +0000 UTC m=+2850.841987200" watchObservedRunningTime="2025-12-12 05:20:59.972469278 +0000 UTC m=+2850.848486425" Dec 12 05:21:02 crc kubenswrapper[4796]: I1212 05:21:02.970098 4796 patch_prober.go:28] interesting 
pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:21:02 crc kubenswrapper[4796]: I1212 05:21:02.970670 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:21:05 crc kubenswrapper[4796]: I1212 05:21:05.194138 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:05 crc kubenswrapper[4796]: I1212 05:21:05.194450 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:05 crc kubenswrapper[4796]: I1212 05:21:05.240611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:06 crc kubenswrapper[4796]: I1212 05:21:06.051649 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:06 crc kubenswrapper[4796]: I1212 05:21:06.103107 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.027977 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8tjl" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="registry-server" containerID="cri-o://469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e" gracePeriod=2 Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.484091 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.712596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bzfv\" (UniqueName: \"kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv\") pod \"0b628baa-16d4-4479-86e0-13341beea112\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.712776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content\") pod \"0b628baa-16d4-4479-86e0-13341beea112\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.712895 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities\") pod \"0b628baa-16d4-4479-86e0-13341beea112\" (UID: \"0b628baa-16d4-4479-86e0-13341beea112\") " Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.714008 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities" (OuterVolumeSpecName: "utilities") pod "0b628baa-16d4-4479-86e0-13341beea112" (UID: "0b628baa-16d4-4479-86e0-13341beea112"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.727540 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv" (OuterVolumeSpecName: "kube-api-access-7bzfv") pod "0b628baa-16d4-4479-86e0-13341beea112" (UID: "0b628baa-16d4-4479-86e0-13341beea112"). InnerVolumeSpecName "kube-api-access-7bzfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.781119 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b628baa-16d4-4479-86e0-13341beea112" (UID: "0b628baa-16d4-4479-86e0-13341beea112"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.814875 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.814908 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bzfv\" (UniqueName: \"kubernetes.io/projected/0b628baa-16d4-4479-86e0-13341beea112-kube-api-access-7bzfv\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:08 crc kubenswrapper[4796]: I1212 05:21:08.814919 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b628baa-16d4-4479-86e0-13341beea112-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.036753 4796 generic.go:334] "Generic (PLEG): container finished" podID="0b628baa-16d4-4479-86e0-13341beea112" containerID="469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e" exitCode=0 Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.036791 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerDied","Data":"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e"} Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.036815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tjl" event={"ID":"0b628baa-16d4-4479-86e0-13341beea112","Type":"ContainerDied","Data":"475ea3d92fdfe73ef6c17e3afcc55602159891009b297a190f9217d5d156d084"} Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.036834 4796 scope.go:117] "RemoveContainer" containerID="469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.036952 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8tjl" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.072202 4796 scope.go:117] "RemoveContainer" containerID="c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.077383 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.085427 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8tjl"] Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.109601 4796 scope.go:117] "RemoveContainer" containerID="14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.151634 4796 scope.go:117] "RemoveContainer" containerID="469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e" Dec 12 05:21:09 crc kubenswrapper[4796]: E1212 05:21:09.152132 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e\": container with ID starting with 469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e not found: ID does not exist" containerID="469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.152180 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e"} err="failed to get container status \"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e\": rpc error: code = NotFound desc = could not find container \"469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e\": container with ID starting with 469f7fe7a94cde8604a64c2c472f530ae7120e21cc008dfcd3b7800d9d59bc4e not found: ID does not exist" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.152210 4796 scope.go:117] "RemoveContainer" containerID="c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188" Dec 12 05:21:09 crc kubenswrapper[4796]: E1212 05:21:09.152534 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188\": container with ID starting with c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188 not found: ID does not exist" containerID="c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.152568 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188"} err="failed to get container status \"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188\": rpc error: code = NotFound desc = could not find container \"c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188\": container with ID starting with c9822f5a054a07468c3a1188720279283238fbf182882148ddaf8b5f52018188 not found: ID does not exist" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.152589 4796 scope.go:117] "RemoveContainer" containerID="14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae" Dec 12 05:21:09 crc kubenswrapper[4796]: E1212 05:21:09.152828 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae\": container with ID starting with 14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae not found: ID does not exist" containerID="14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.152852 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae"} err="failed to get container status \"14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae\": rpc error: code = NotFound desc = could not find container \"14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae\": container with ID starting with 14871161ab4f5408745e1b815a1ad8422650e75bc1a63316dd1a2dacdd93c6ae not found: ID does not exist" Dec 12 05:21:09 crc kubenswrapper[4796]: I1212 05:21:09.422494 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b628baa-16d4-4479-86e0-13341beea112" path="/var/lib/kubelet/pods/0b628baa-16d4-4479-86e0-13341beea112/volumes" Dec 12 05:21:32 crc kubenswrapper[4796]: I1212 05:21:32.969753 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:21:32 crc kubenswrapper[4796]: I1212 05:21:32.970216 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.472145 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:36 crc kubenswrapper[4796]: E1212 05:21:36.472861 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="extract-utilities" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.472877 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="extract-utilities" Dec 12 05:21:36 crc kubenswrapper[4796]: E1212 05:21:36.472900 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="registry-server" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.472908 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="registry-server" Dec 12 05:21:36 crc kubenswrapper[4796]: E1212 05:21:36.472927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="extract-content" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.472934 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="extract-content" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.473183 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b628baa-16d4-4479-86e0-13341beea112" containerName="registry-server" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 
05:21:36.475096 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.500465 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8z4\" (UniqueName: \"kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.502439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.502668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.539524 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.605327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.605388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.605491 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz8z4\" (UniqueName: \"kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.606023 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.606050 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc 
kubenswrapper[4796]: I1212 05:21:36.624402 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz8z4\" (UniqueName: \"kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4\") pod \"certified-operators-knc7t\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:36 crc kubenswrapper[4796]: I1212 05:21:36.805179 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:37 crc kubenswrapper[4796]: W1212 05:21:37.353930 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f5f182d_e1d8_4b58_bee1_f8864fcbc0b6.slice/crio-fc10628e41af3bc67e6bf74c5734042fc5349bd45fd18c2c2f86d680378c1871 WatchSource:0}: Error finding container fc10628e41af3bc67e6bf74c5734042fc5349bd45fd18c2c2f86d680378c1871: Status 404 returned error can't find the container with id fc10628e41af3bc67e6bf74c5734042fc5349bd45fd18c2c2f86d680378c1871 Dec 12 05:21:37 crc kubenswrapper[4796]: I1212 05:21:37.377898 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:37 crc kubenswrapper[4796]: I1212 05:21:37.407434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerStarted","Data":"fc10628e41af3bc67e6bf74c5734042fc5349bd45fd18c2c2f86d680378c1871"} Dec 12 05:21:38 crc kubenswrapper[4796]: I1212 05:21:38.419796 4796 generic.go:334] "Generic (PLEG): container finished" podID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerID="1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53" exitCode=0 Dec 12 05:21:38 crc kubenswrapper[4796]: I1212 05:21:38.420099 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerDied","Data":"1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53"} Dec 12 05:21:38 crc kubenswrapper[4796]: I1212 05:21:38.422227 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:21:39 crc kubenswrapper[4796]: I1212 05:21:39.429744 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerStarted","Data":"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f"} Dec 12 05:21:40 crc kubenswrapper[4796]: I1212 05:21:40.440421 4796 generic.go:334] "Generic (PLEG): container finished" podID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerID="bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f" exitCode=0 Dec 12 05:21:40 crc kubenswrapper[4796]: I1212 05:21:40.440473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerDied","Data":"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f"} Dec 12 05:21:41 crc kubenswrapper[4796]: I1212 05:21:41.451410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" 
event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerStarted","Data":"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8"} Dec 12 05:21:41 crc kubenswrapper[4796]: I1212 05:21:41.476156 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knc7t" podStartSLOduration=2.945739064 podStartE2EDuration="5.47614138s" podCreationTimestamp="2025-12-12 05:21:36 +0000 UTC" firstStartedPulling="2025-12-12 05:21:38.421966749 +0000 UTC m=+2889.297983906" lastFinishedPulling="2025-12-12 05:21:40.952369065 +0000 UTC m=+2891.828386222" observedRunningTime="2025-12-12 05:21:41.470900665 +0000 UTC m=+2892.346917822" watchObservedRunningTime="2025-12-12 05:21:41.47614138 +0000 UTC m=+2892.352158527" Dec 12 05:21:46 crc kubenswrapper[4796]: I1212 05:21:46.805617 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:46 crc kubenswrapper[4796]: I1212 05:21:46.806203 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:46 crc kubenswrapper[4796]: I1212 05:21:46.854407 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:47 crc kubenswrapper[4796]: I1212 05:21:47.591339 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:47 crc kubenswrapper[4796]: I1212 05:21:47.646764 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:49 crc kubenswrapper[4796]: I1212 05:21:49.537661 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knc7t" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="registry-server" containerID="cri-o://f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8" gracePeriod=2 Dec 12 05:21:49 crc kubenswrapper[4796]: I1212 05:21:49.960167 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.107757 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content\") pod \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.107831 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz8z4\" (UniqueName: \"kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4\") pod \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.107937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities\") pod \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\" (UID: \"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6\") " Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.109422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities" (OuterVolumeSpecName: "utilities") pod "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" (UID: "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.115196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4" (OuterVolumeSpecName: "kube-api-access-vz8z4") pod "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" (UID: "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6"). InnerVolumeSpecName "kube-api-access-vz8z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.196910 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" (UID: "8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.210682 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.210710 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.210721 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz8z4\" (UniqueName: \"kubernetes.io/projected/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6-kube-api-access-vz8z4\") on node \"crc\" DevicePath \"\"" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.549709 4796 generic.go:334] "Generic (PLEG): container finished" podID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerID="f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8" exitCode=0 Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.549743 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knc7t" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.549763 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerDied","Data":"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8"} Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.550406 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knc7t" event={"ID":"8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6","Type":"ContainerDied","Data":"fc10628e41af3bc67e6bf74c5734042fc5349bd45fd18c2c2f86d680378c1871"} Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.550427 4796 scope.go:117] "RemoveContainer" containerID="f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.574207 4796 scope.go:117] "RemoveContainer" containerID="bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.591903 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.605683 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knc7t"] Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.618464 4796 scope.go:117] "RemoveContainer" containerID="1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.648786 4796 scope.go:117] "RemoveContainer" containerID="f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8" Dec 12 05:21:50 crc kubenswrapper[4796]: E1212 05:21:50.649255 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8\": container with ID starting with f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8 not found: ID does not exist" containerID="f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.649312 
4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8"} err="failed to get container status \"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8\": rpc error: code = NotFound desc = could not find container \"f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8\": container with ID starting with f02b863b7fdbc252afc17551959a462d8169739a99839e5fe2f753fb032e33e8 not found: ID does not exist" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.649336 4796 scope.go:117] "RemoveContainer" containerID="bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f" Dec 12 05:21:50 crc kubenswrapper[4796]: E1212 05:21:50.649573 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f\": container with ID starting with bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f not found: ID does not exist" containerID="bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.649598 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f"} err="failed to get container status \"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f\": rpc error: code = NotFound desc = could not find container \"bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f\": container with ID starting with bba9df2d87731354cef6fc4bbdf3521d16a1391934aec0d850a81bfe93288e0f not found: ID does not exist" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.649618 4796 scope.go:117] "RemoveContainer" containerID="1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53" Dec 12 05:21:50 crc kubenswrapper[4796]: E1212 05:21:50.649897 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53\": container with ID starting with 1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53 not found: ID does not exist" containerID="1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53" Dec 12 05:21:50 crc kubenswrapper[4796]: I1212 05:21:50.649920 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53"} err="failed to get container status \"1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53\": rpc error: code = NotFound desc = could not find container \"1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53\": container with ID starting with 1d149a74ae4680f18bc6a82883863d1220736450ae07d4790baff7548d62ae53 not found: ID does not exist" Dec 12 05:21:51 crc kubenswrapper[4796]: I1212 05:21:51.424539 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" path="/var/lib/kubelet/pods/8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6/volumes" Dec 12 05:22:02 crc kubenswrapper[4796]: I1212 05:22:02.969874 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:22:02 crc kubenswrapper[4796]: I1212 05:22:02.972360 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:22:02 crc kubenswrapper[4796]: I1212 05:22:02.972689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:22:02 crc kubenswrapper[4796]: I1212 05:22:02.973987 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:22:02 crc kubenswrapper[4796]: I1212 05:22:02.974303 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48" gracePeriod=600 Dec 12 05:22:03 crc kubenswrapper[4796]: I1212 05:22:03.686988 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48" exitCode=0 Dec 12 05:22:03 crc kubenswrapper[4796]: I1212 05:22:03.687603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48"} Dec 12 05:22:03 crc kubenswrapper[4796]: I1212 05:22:03.687632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4"} Dec 12 05:22:03 crc kubenswrapper[4796]: I1212 05:22:03.687652 4796 scope.go:117] "RemoveContainer" containerID="682cfbb3e046793d4e85150f842e3d88ba1bd0dde56d75f4ab1552c1758e9aa9" Dec 12 05:22:27 crc kubenswrapper[4796]: I1212 05:22:27.936760 4796 generic.go:334] "Generic (PLEG): container finished" podID="e3e68ee3-5e0b-4748-a03f-9c4d226b690c" containerID="3f59c53192feb06d29b86fda054a7ceaae2cd456b689a92e82351424d98e26a3" exitCode=0 Dec 12 05:22:27 crc kubenswrapper[4796]: I1212 05:22:27.936873 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" event={"ID":"e3e68ee3-5e0b-4748-a03f-9c4d226b690c","Type":"ContainerDied","Data":"3f59c53192feb06d29b86fda054a7ceaae2cd456b689a92e82351424d98e26a3"} Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.446040 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.529406 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530189 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrd54\" (UniqueName: \"kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530370 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.530403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0\") pod \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\" (UID: \"e3e68ee3-5e0b-4748-a03f-9c4d226b690c\") " Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.537032 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.542742 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54" (OuterVolumeSpecName: "kube-api-access-wrd54") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). 
InnerVolumeSpecName "kube-api-access-wrd54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.557744 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.558892 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.576995 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory" (OuterVolumeSpecName: "inventory") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.579691 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.585653 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e3e68ee3-5e0b-4748-a03f-9c4d226b690c" (UID: "e3e68ee3-5e0b-4748-a03f-9c4d226b690c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632406 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632439 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrd54\" (UniqueName: \"kubernetes.io/projected/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-kube-api-access-wrd54\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632449 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632458 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632468 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632476 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.632487 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e68ee3-5e0b-4748-a03f-9c4d226b690c-inventory\") on node \"crc\" DevicePath \"\"" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.960869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" event={"ID":"e3e68ee3-5e0b-4748-a03f-9c4d226b690c","Type":"ContainerDied","Data":"ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222"} Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.960923 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecafdc7cdc2dae7678066e7130ab40263b63a5ed0b3c56b4166c9ed8b25c8222" Dec 12 05:22:29 crc kubenswrapper[4796]: I1212 05:22:29.961166 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-phst6" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.117449 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 12 05:23:13 crc kubenswrapper[4796]: E1212 05:23:13.118788 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="extract-utilities" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.118813 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="extract-utilities" Dec 12 05:23:13 crc kubenswrapper[4796]: E1212 05:23:13.118835 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="registry-server" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.118845 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="registry-server" Dec 12 05:23:13 crc kubenswrapper[4796]: E1212 05:23:13.118875 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e68ee3-5e0b-4748-a03f-9c4d226b690c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.118888 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e68ee3-5e0b-4748-a03f-9c4d226b690c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 05:23:13 crc kubenswrapper[4796]: E1212 05:23:13.118909 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="extract-content" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.118920 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="extract-content" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.119244 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5f182d-e1d8-4b58-bee1-f8864fcbc0b6" containerName="registry-server" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.119302 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e68ee3-5e0b-4748-a03f-9c4d226b690c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.120338 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.123153 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.124186 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g5wb6" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.124638 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.125856 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.140728 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.189679 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.189932 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.190061 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44dj\" (UniqueName: \"kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292127 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292190 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292503 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292657 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.292757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.293484 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.294523 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.303200 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395370 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395527 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44dj\" (UniqueName: \"kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.395689 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.396380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.397018 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.397300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.400872 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 
05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.401881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.421048 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44dj\" (UniqueName: \"kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.444531 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.450863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 05:23:13 crc kubenswrapper[4796]: I1212 05:23:13.961739 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 12 05:23:14 crc kubenswrapper[4796]: I1212 05:23:14.428254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd","Type":"ContainerStarted","Data":"64fc8959b3929f6b525711ba5d2aeec44b60aca24064a83e25538c5d02aaf520"} Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.620320 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.625657 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.632845 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.720073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.720194 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknpc\" (UniqueName: \"kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.720227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.810833 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.812959 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.822029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.822113 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknpc\" (UniqueName: \"kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.822138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.824225 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.824238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.846559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknpc\" (UniqueName: \"kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc\") pod \"redhat-marketplace-k8thb\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.861384 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.924060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrr55\" (UniqueName: \"kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.924382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.924653 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:25 crc kubenswrapper[4796]: I1212 05:23:25.960760 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.027049 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.027179 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrr55\" (UniqueName: \"kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.027200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.027726 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.028695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.043459 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrr55\" (UniqueName: \"kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55\") pod \"redhat-operators-mg5rl\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:26 crc kubenswrapper[4796]: I1212 05:23:26.189402 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:23:47 crc kubenswrapper[4796]: E1212 05:23:47.886084 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 12 05:23:47 crc kubenswrapper[4796]: E1212 05:23:47.892210 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j44dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 12 05:23:47 crc kubenswrapper[4796]: E1212 05:23:47.893747 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.291304 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.356637 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.765636 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b59299-8ff7-446e-b565-62933beca104" containerID="c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a" exitCode=0 Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.765690 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerDied","Data":"c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a"} Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.766102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerStarted","Data":"318eadfc642122007900e41a05a4774056f817300fc2af354802d9cbc176c094"} Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.767934 4796 generic.go:334] "Generic (PLEG): container finished" podID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerID="b20cd1caa7844daef65c14c70633934554ae90055c154a97799a1a7d29ab2b18" exitCode=0 Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.768624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerDied","Data":"b20cd1caa7844daef65c14c70633934554ae90055c154a97799a1a7d29ab2b18"} Dec 12 05:23:48 crc kubenswrapper[4796]: I1212 05:23:48.768665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerStarted","Data":"9bebfeaaa71e387295ab6eb549e64d37dd94be207bb9434809d7c4e1bc188e19"} Dec 12 05:23:48 crc kubenswrapper[4796]: E1212 05:23:48.770258 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" Dec 12 05:23:49 crc kubenswrapper[4796]: I1212 05:23:49.782595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerStarted","Data":"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1"} Dec 12 05:23:49 crc kubenswrapper[4796]: I1212 05:23:49.785650 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerStarted","Data":"9631b412af8ae632a081d8ea4db615777e1f9ca2385ab87f19ed97a950eb9dbf"} Dec 12 05:23:51 crc kubenswrapper[4796]: I1212 05:23:51.806550 4796 generic.go:334] "Generic (PLEG): container finished" podID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerID="9631b412af8ae632a081d8ea4db615777e1f9ca2385ab87f19ed97a950eb9dbf" exitCode=0 Dec 12 05:23:51 crc kubenswrapper[4796]: I1212 05:23:51.806628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerDied","Data":"9631b412af8ae632a081d8ea4db615777e1f9ca2385ab87f19ed97a950eb9dbf"} Dec 12 05:23:56 crc kubenswrapper[4796]: I1212 05:23:56.867975 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b59299-8ff7-446e-b565-62933beca104" containerID="846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1" exitCode=0 Dec 12 05:23:56 crc kubenswrapper[4796]: I1212 05:23:56.868073 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerDied","Data":"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1"} Dec 12 05:23:56 crc kubenswrapper[4796]: I1212 05:23:56.876345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerStarted","Data":"8315a50d826869b9bf4f3c6b286a239420dcdf10b1c88dec7b7c01347ba18c5c"} Dec 12 05:23:56 crc kubenswrapper[4796]: I1212 05:23:56.911846 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8thb" podStartSLOduration=27.616047748 podStartE2EDuration="31.911827038s" podCreationTimestamp="2025-12-12 05:23:25 +0000 UTC" firstStartedPulling="2025-12-12 05:23:48.770152025 +0000 UTC m=+3019.646169172" lastFinishedPulling="2025-12-12 05:23:53.065931315 +0000 UTC m=+3023.941948462" observedRunningTime="2025-12-12 05:23:56.906953265 +0000 UTC m=+3027.782970412" watchObservedRunningTime="2025-12-12 05:23:56.911827038 +0000 UTC m=+3027.787844185" Dec 12 05:23:57 crc kubenswrapper[4796]: I1212 05:23:57.888575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerStarted","Data":"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a"} Dec 12 05:23:57 crc kubenswrapper[4796]: I1212 05:23:57.911756 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mg5rl" podStartSLOduration=24.420153905 podStartE2EDuration="32.911736227s" podCreationTimestamp="2025-12-12 05:23:25 +0000 UTC" firstStartedPulling="2025-12-12 05:23:48.767680618 +0000 UTC m=+3019.643697775" lastFinishedPulling="2025-12-12 05:23:57.25926296 +0000 UTC m=+3028.135280097" observedRunningTime="2025-12-12 05:23:57.907132312 +0000 UTC m=+3028.783149459" watchObservedRunningTime="2025-12-12 05:23:57.911736227 +0000 UTC m=+3028.787753394" Dec 12 05:24:02 crc kubenswrapper[4796]: I1212 05:24:02.984827 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 12 05:24:04 crc kubenswrapper[4796]: I1212 05:24:04.950903 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd","Type":"ContainerStarted","Data":"c31077ca704001a6615eaecea7dbdac674ec32e30484008d79c005e728d9eeae"} Dec 12 05:24:04 crc kubenswrapper[4796]: I1212 05:24:04.972907 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.97779314 podStartE2EDuration="52.972886622s" podCreationTimestamp="2025-12-12 05:23:12 +0000 UTC" firstStartedPulling="2025-12-12 05:23:13.98690318 +0000 UTC m=+2984.862920347" lastFinishedPulling="2025-12-12 05:24:02.981996692 +0000 UTC m=+3033.858013829" observedRunningTime="2025-12-12 05:24:04.967240175 +0000 UTC m=+3035.843257342" watchObservedRunningTime="2025-12-12 05:24:04.972886622 +0000 UTC m=+3035.848903779" Dec 12 05:24:05 crc kubenswrapper[4796]: I1212 05:24:05.961965 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:05 crc kubenswrapper[4796]: I1212 05:24:05.962492 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:06 crc kubenswrapper[4796]: I1212 05:24:06.016676 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:06 crc kubenswrapper[4796]: I1212 05:24:06.190472 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:06 crc kubenswrapper[4796]: I1212 05:24:06.191686 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:06 crc kubenswrapper[4796]: I1212 05:24:06.235335 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:07 crc kubenswrapper[4796]: I1212 05:24:07.038862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:07 crc kubenswrapper[4796]: I1212 05:24:07.057530 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:07 crc kubenswrapper[4796]: I1212 05:24:07.652788 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:24:08 crc kubenswrapper[4796]: I1212 05:24:08.993484 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k8thb" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="registry-server" containerID="cri-o://8315a50d826869b9bf4f3c6b286a239420dcdf10b1c88dec7b7c01347ba18c5c" gracePeriod=2 Dec 12 05:24:09 crc kubenswrapper[4796]: I1212 05:24:09.049762 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.006922 4796 generic.go:334] "Generic (PLEG): container finished" podID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerID="8315a50d826869b9bf4f3c6b286a239420dcdf10b1c88dec7b7c01347ba18c5c" exitCode=0 Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.007064 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" 
event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerDied","Data":"8315a50d826869b9bf4f3c6b286a239420dcdf10b1c88dec7b7c01347ba18c5c"} Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.007413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8thb" event={"ID":"960c5aeb-f2e8-4a11-918f-f1941da099ec","Type":"ContainerDied","Data":"9bebfeaaa71e387295ab6eb549e64d37dd94be207bb9434809d7c4e1bc188e19"} Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.007427 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bebfeaaa71e387295ab6eb549e64d37dd94be207bb9434809d7c4e1bc188e19" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.007539 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mg5rl" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="registry-server" containerID="cri-o://d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a" gracePeriod=2 Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.099080 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.195590 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities\") pod \"960c5aeb-f2e8-4a11-918f-f1941da099ec\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.195645 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknpc\" (UniqueName: \"kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc\") pod \"960c5aeb-f2e8-4a11-918f-f1941da099ec\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.195685 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content\") pod \"960c5aeb-f2e8-4a11-918f-f1941da099ec\" (UID: \"960c5aeb-f2e8-4a11-918f-f1941da099ec\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.196763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities" (OuterVolumeSpecName: "utilities") pod "960c5aeb-f2e8-4a11-918f-f1941da099ec" (UID: "960c5aeb-f2e8-4a11-918f-f1941da099ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.203467 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc" (OuterVolumeSpecName: "kube-api-access-tknpc") pod "960c5aeb-f2e8-4a11-918f-f1941da099ec" (UID: "960c5aeb-f2e8-4a11-918f-f1941da099ec"). InnerVolumeSpecName "kube-api-access-tknpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.243833 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "960c5aeb-f2e8-4a11-918f-f1941da099ec" (UID: "960c5aeb-f2e8-4a11-918f-f1941da099ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.297812 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.297847 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknpc\" (UniqueName: \"kubernetes.io/projected/960c5aeb-f2e8-4a11-918f-f1941da099ec-kube-api-access-tknpc\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.297857 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960c5aeb-f2e8-4a11-918f-f1941da099ec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.444994 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.603060 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content\") pod \"67b59299-8ff7-446e-b565-62933beca104\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.603150 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities\") pod \"67b59299-8ff7-446e-b565-62933beca104\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.603371 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrr55\" (UniqueName: \"kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55\") pod \"67b59299-8ff7-446e-b565-62933beca104\" (UID: \"67b59299-8ff7-446e-b565-62933beca104\") " Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.604126 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities" (OuterVolumeSpecName: "utilities") pod "67b59299-8ff7-446e-b565-62933beca104" (UID: "67b59299-8ff7-446e-b565-62933beca104"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.615527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55" (OuterVolumeSpecName: "kube-api-access-lrr55") pod "67b59299-8ff7-446e-b565-62933beca104" (UID: "67b59299-8ff7-446e-b565-62933beca104"). InnerVolumeSpecName "kube-api-access-lrr55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.705743 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrr55\" (UniqueName: \"kubernetes.io/projected/67b59299-8ff7-446e-b565-62933beca104-kube-api-access-lrr55\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.705793 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.722938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67b59299-8ff7-446e-b565-62933beca104" (UID: "67b59299-8ff7-446e-b565-62933beca104"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:24:10 crc kubenswrapper[4796]: I1212 05:24:10.807170 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b59299-8ff7-446e-b565-62933beca104-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.019189 4796 generic.go:334] "Generic (PLEG): container finished" podID="67b59299-8ff7-446e-b565-62933beca104" containerID="d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a" exitCode=0 Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.019676 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8thb" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.019309 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mg5rl" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.019331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerDied","Data":"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a"} Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.019986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg5rl" event={"ID":"67b59299-8ff7-446e-b565-62933beca104","Type":"ContainerDied","Data":"318eadfc642122007900e41a05a4774056f817300fc2af354802d9cbc176c094"} Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.020026 4796 scope.go:117] "RemoveContainer" containerID="d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.075603 4796 scope.go:117] "RemoveContainer" containerID="846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.087415 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.095617 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8thb"] Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.110849 4796 scope.go:117] "RemoveContainer" containerID="c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.112641 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.149125 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mg5rl"] Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.179591 4796 scope.go:117] "RemoveContainer" containerID="d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a" Dec 12 05:24:11 crc kubenswrapper[4796]: E1212 05:24:11.181559 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a\": container with ID starting with d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a not found: ID does not exist" containerID="d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.181606 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a"} err="failed to get container status \"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a\": rpc error: code = NotFound desc = could not find container \"d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a\": container with ID starting with d893801296ef89de2a81121d65dd8d735bb9e3ac5e78d8a59c58b3828fb6a40a not found: ID does not exist" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.181634 4796 scope.go:117] "RemoveContainer" containerID="846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1" Dec 12 05:24:11 crc kubenswrapper[4796]: E1212 05:24:11.182164 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1\": container with ID starting with 846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1 not found: ID does not exist" containerID="846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.182191 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1"} err="failed to get container status \"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1\": rpc error: code = NotFound desc = could not find container \"846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1\": container with ID starting with 846a2750c2070bc7889ebe2bdeb1b99689ff98aa6da508712b47e04ffd7c8ec1 not found: ID does not exist" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.182209 4796 scope.go:117] "RemoveContainer" containerID="c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a" Dec 12 05:24:11 crc kubenswrapper[4796]: E1212 05:24:11.182558 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a\": container with ID starting with c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a not found: ID does not exist" containerID="c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.182586 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a"} err="failed to get container status \"c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a\": rpc error: code = NotFound desc = could not find container \"c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a\": container with ID starting with c962ce3174d87f12a15d1f355c8c2da872f8196609fd6767bd0fd13bdfd3614a not found: ID does not exist" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.423229 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b59299-8ff7-446e-b565-62933beca104" path="/var/lib/kubelet/pods/67b59299-8ff7-446e-b565-62933beca104/volumes" Dec 12 05:24:11 crc kubenswrapper[4796]: I1212 05:24:11.423972 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" path="/var/lib/kubelet/pods/960c5aeb-f2e8-4a11-918f-f1941da099ec/volumes" Dec 12 05:24:32 crc kubenswrapper[4796]: I1212 05:24:32.969559 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:24:32 crc kubenswrapper[4796]: I1212 05:24:32.970329 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:25:02 crc kubenswrapper[4796]: I1212 05:25:02.969956 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:25:02 crc kubenswrapper[4796]: I1212 05:25:02.970575 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:25:32 crc kubenswrapper[4796]: I1212 05:25:32.969637 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:25:32 crc kubenswrapper[4796]: I1212 05:25:32.970376 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:25:32 crc kubenswrapper[4796]: I1212 05:25:32.970428 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:25:32 crc kubenswrapper[4796]: I1212 05:25:32.971118 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:25:32 crc kubenswrapper[4796]: I1212 05:25:32.971162 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" gracePeriod=600 Dec 12 05:25:33 crc kubenswrapper[4796]: E1212 05:25:33.109790 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:25:33 crc kubenswrapper[4796]: I1212 05:25:33.858494 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" exitCode=0 Dec 12 05:25:33 crc kubenswrapper[4796]: I1212 05:25:33.858555 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4"} Dec 12 05:25:33 crc kubenswrapper[4796]: I1212 05:25:33.858821 4796 scope.go:117] "RemoveContainer" 
containerID="cb81b90ccdfc895836986a68363825d029bee91243137cce996dc96ac7b86f48" Dec 12 05:25:33 crc kubenswrapper[4796]: I1212 05:25:33.859776 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:25:33 crc kubenswrapper[4796]: E1212 05:25:33.860221 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:25:44 crc kubenswrapper[4796]: I1212 05:25:44.411004 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:25:44 crc kubenswrapper[4796]: E1212 05:25:44.411847 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:25:55 crc kubenswrapper[4796]: I1212 05:25:55.411783 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:25:55 crc kubenswrapper[4796]: E1212 05:25:55.412568 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:26:08 crc kubenswrapper[4796]: I1212 05:26:08.411839 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:26:08 crc kubenswrapper[4796]: E1212 05:26:08.412601 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:26:21 crc kubenswrapper[4796]: I1212 05:26:21.411229 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:26:21 crc kubenswrapper[4796]: E1212 05:26:21.412017 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:26:35 crc kubenswrapper[4796]: I1212 05:26:35.411763 4796 scope.go:117] "RemoveContainer" 
containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:26:35 crc kubenswrapper[4796]: E1212 05:26:35.412834 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:26:50 crc kubenswrapper[4796]: I1212 05:26:50.412232 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:26:50 crc kubenswrapper[4796]: E1212 05:26:50.413071 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:27:01 crc kubenswrapper[4796]: I1212 05:27:01.412562 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:27:01 crc kubenswrapper[4796]: E1212 05:27:01.413373 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:27:14 crc kubenswrapper[4796]: I1212 05:27:14.411404 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:27:14 crc kubenswrapper[4796]: E1212 05:27:14.413273 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:27:26 crc kubenswrapper[4796]: I1212 05:27:26.411801 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:27:26 crc kubenswrapper[4796]: E1212 05:27:26.412536 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:27:38 crc kubenswrapper[4796]: I1212 05:27:38.412895 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:27:38 crc kubenswrapper[4796]: E1212 05:27:38.413824 4796 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:27:53 crc kubenswrapper[4796]: I1212 05:27:53.410931 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:27:53 crc kubenswrapper[4796]: E1212 05:27:53.411710 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:28:08 crc kubenswrapper[4796]: I1212 05:28:08.411908 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:28:08 crc kubenswrapper[4796]: E1212 05:28:08.412774 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:28:19 crc kubenswrapper[4796]: I1212 05:28:19.418796 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:28:19 crc kubenswrapper[4796]: E1212 05:28:19.420934 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.680813 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.681582 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" podUID="115ae990-f683-4fd1-a4e4-4eef88a10f24" containerName="route-controller-manager" containerID="cri-o://85c3993a031cc83eccb973b69625f3b1391af83b656e796008df0e838bf4f9a7" gracePeriod=30 Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.687945 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.688348 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" podUID="2cd00450-0856-4304-87dc-49a681645acd" containerName="controller-manager" 
containerID="cri-o://9e10cb2a71d2cd1a37c9c140f9806072158e8a2c4c3807d70779b7480944b5f0" gracePeriod=30 Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.834133 4796 generic.go:334] "Generic (PLEG): container finished" podID="2cd00450-0856-4304-87dc-49a681645acd" containerID="9e10cb2a71d2cd1a37c9c140f9806072158e8a2c4c3807d70779b7480944b5f0" exitCode=0 Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.834203 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" event={"ID":"2cd00450-0856-4304-87dc-49a681645acd","Type":"ContainerDied","Data":"9e10cb2a71d2cd1a37c9c140f9806072158e8a2c4c3807d70779b7480944b5f0"} Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.836309 4796 generic.go:334] "Generic (PLEG): container finished" podID="115ae990-f683-4fd1-a4e4-4eef88a10f24" containerID="85c3993a031cc83eccb973b69625f3b1391af83b656e796008df0e838bf4f9a7" exitCode=0 Dec 12 05:28:27 crc kubenswrapper[4796]: I1212 05:28:27.836348 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" event={"ID":"115ae990-f683-4fd1-a4e4-4eef88a10f24","Type":"ContainerDied","Data":"85c3993a031cc83eccb973b69625f3b1391af83b656e796008df0e838bf4f9a7"} Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.661616 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.758601 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.833049 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4c9m\" (UniqueName: \"kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m\") pod \"2cd00450-0856-4304-87dc-49a681645acd\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.833576 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert\") pod \"2cd00450-0856-4304-87dc-49a681645acd\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.833618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca\") pod \"2cd00450-0856-4304-87dc-49a681645acd\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.833690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles\") pod \"2cd00450-0856-4304-87dc-49a681645acd\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.833778 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config\") pod \"2cd00450-0856-4304-87dc-49a681645acd\" (UID: \"2cd00450-0856-4304-87dc-49a681645acd\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 
05:28:28.834684 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cd00450-0856-4304-87dc-49a681645acd" (UID: "2cd00450-0856-4304-87dc-49a681645acd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.834848 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config" (OuterVolumeSpecName: "config") pod "2cd00450-0856-4304-87dc-49a681645acd" (UID: "2cd00450-0856-4304-87dc-49a681645acd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.835078 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2cd00450-0856-4304-87dc-49a681645acd" (UID: "2cd00450-0856-4304-87dc-49a681645acd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.839835 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cd00450-0856-4304-87dc-49a681645acd" (UID: "2cd00450-0856-4304-87dc-49a681645acd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.842500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m" (OuterVolumeSpecName: "kube-api-access-g4c9m") pod "2cd00450-0856-4304-87dc-49a681645acd" (UID: "2cd00450-0856-4304-87dc-49a681645acd"). InnerVolumeSpecName "kube-api-access-g4c9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.862539 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.862549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x" event={"ID":"115ae990-f683-4fd1-a4e4-4eef88a10f24","Type":"ContainerDied","Data":"deaf2967ddcefd775f16227c8c6a6568907c3bc041b99c32bac9ed900c07abf8"} Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.862616 4796 scope.go:117] "RemoveContainer" containerID="85c3993a031cc83eccb973b69625f3b1391af83b656e796008df0e838bf4f9a7" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.868549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" event={"ID":"2cd00450-0856-4304-87dc-49a681645acd","Type":"ContainerDied","Data":"605c725aff246c0859c16e43406aa3e452b7131cc5003919bf7946f21dff5394"} Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.868631 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.934834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca\") pod \"115ae990-f683-4fd1-a4e4-4eef88a10f24\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.934914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert\") pod \"115ae990-f683-4fd1-a4e4-4eef88a10f24\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkk8\" (UniqueName: \"kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8\") pod \"115ae990-f683-4fd1-a4e4-4eef88a10f24\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935137 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config\") pod \"115ae990-f683-4fd1-a4e4-4eef88a10f24\" (UID: \"115ae990-f683-4fd1-a4e4-4eef88a10f24\") " Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935580 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4c9m\" (UniqueName: \"kubernetes.io/projected/2cd00450-0856-4304-87dc-49a681645acd-kube-api-access-g4c9m\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935593 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd00450-0856-4304-87dc-49a681645acd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935602 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935610 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.935618 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd00450-0856-4304-87dc-49a681645acd-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.936003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca" (OuterVolumeSpecName: "client-ca") pod "115ae990-f683-4fd1-a4e4-4eef88a10f24" (UID: "115ae990-f683-4fd1-a4e4-4eef88a10f24"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.936033 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config" (OuterVolumeSpecName: "config") pod "115ae990-f683-4fd1-a4e4-4eef88a10f24" (UID: "115ae990-f683-4fd1-a4e4-4eef88a10f24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.951460 4796 scope.go:117] "RemoveContainer" containerID="9e10cb2a71d2cd1a37c9c140f9806072158e8a2c4c3807d70779b7480944b5f0" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.951991 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "115ae990-f683-4fd1-a4e4-4eef88a10f24" (UID: "115ae990-f683-4fd1-a4e4-4eef88a10f24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:28:28 crc kubenswrapper[4796]: I1212 05:28:28.954497 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8" (OuterVolumeSpecName: "kube-api-access-gqkk8") pod "115ae990-f683-4fd1-a4e4-4eef88a10f24" (UID: "115ae990-f683-4fd1-a4e4-4eef88a10f24"). InnerVolumeSpecName "kube-api-access-gqkk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.037735 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/115ae990-f683-4fd1-a4e4-4eef88a10f24-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.037780 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkk8\" (UniqueName: \"kubernetes.io/projected/115ae990-f683-4fd1-a4e4-4eef88a10f24-kube-api-access-gqkk8\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.037796 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.037807 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/115ae990-f683-4fd1-a4e4-4eef88a10f24-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.049979 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.057981 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d64c9b9bf-59k6h"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.192682 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.204523 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6d7979b4-8d46x"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.425713 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="115ae990-f683-4fd1-a4e4-4eef88a10f24" path="/var/lib/kubelet/pods/115ae990-f683-4fd1-a4e4-4eef88a10f24/volumes" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.426692 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd00450-0856-4304-87dc-49a681645acd" path="/var/lib/kubelet/pods/2cd00450-0856-4304-87dc-49a681645acd/volumes" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.451616 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784464b967-jggxj"] Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452056 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452072 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452086 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="extract-utilities" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452094 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="extract-utilities" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452122 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd00450-0856-4304-87dc-49a681645acd" containerName="controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452128 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd00450-0856-4304-87dc-49a681645acd" containerName="controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452139 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="extract-content" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452145 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="extract-content" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452185 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452192 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452204 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115ae990-f683-4fd1-a4e4-4eef88a10f24" containerName="route-controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452210 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="115ae990-f683-4fd1-a4e4-4eef88a10f24" containerName="route-controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452224 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="extract-content" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452229 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="extract-content" Dec 12 05:28:29 crc kubenswrapper[4796]: E1212 05:28:29.452239 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" 
containerName="extract-utilities" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452244 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="extract-utilities" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452426 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="115ae990-f683-4fd1-a4e4-4eef88a10f24" containerName="route-controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452445 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b59299-8ff7-446e-b565-62933beca104" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452457 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="960c5aeb-f2e8-4a11-918f-f1941da099ec" containerName="registry-server" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.452470 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd00450-0856-4304-87dc-49a681645acd" containerName="controller-manager" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.453091 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.456675 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.457068 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.457139 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.457163 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.457219 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.457503 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.461767 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784464b967-jggxj"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.462490 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.546702 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-proxy-ca-bundles\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.546950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee74affd-a878-46ee-b08c-2fcdc8d1844c-serving-cert\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") 
" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.546992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prx4f\" (UniqueName: \"kubernetes.io/projected/ee74affd-a878-46ee-b08c-2fcdc8d1844c-kube-api-access-prx4f\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.547034 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-client-ca\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.547124 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-config\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.649154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-proxy-ca-bundles\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.649221 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee74affd-a878-46ee-b08c-2fcdc8d1844c-serving-cert\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.649257 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prx4f\" (UniqueName: \"kubernetes.io/projected/ee74affd-a878-46ee-b08c-2fcdc8d1844c-kube-api-access-prx4f\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.649309 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-client-ca\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.649386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-config\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.650576 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-client-ca\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.650825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-config\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.651383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee74affd-a878-46ee-b08c-2fcdc8d1844c-proxy-ca-bundles\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.655096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee74affd-a878-46ee-b08c-2fcdc8d1844c-serving-cert\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.675244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prx4f\" (UniqueName: \"kubernetes.io/projected/ee74affd-a878-46ee-b08c-2fcdc8d1844c-kube-api-access-prx4f\") pod \"controller-manager-784464b967-jggxj\" (UID: \"ee74affd-a878-46ee-b08c-2fcdc8d1844c\") " pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.693835 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.698363 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.708219 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.709059 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.710462 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.710865 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.713064 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.713940 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.721088 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv"] Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.769979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.853110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-serving-cert\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.853494 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-client-ca\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.853719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-config\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.853939 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllnl\" (UniqueName: \"kubernetes.io/projected/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-kube-api-access-sllnl\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.955504 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-serving-cert\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.956700 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-client-ca\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.956766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-config\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.956808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllnl\" (UniqueName: \"kubernetes.io/projected/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-kube-api-access-sllnl\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.957848 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-client-ca\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.958620 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-config\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.959952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-serving-cert\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:29 crc kubenswrapper[4796]: I1212 05:28:29.988310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllnl\" (UniqueName: \"kubernetes.io/projected/76073367-d16a-4ce8-8d76-f6dcde4c0c3d-kube-api-access-sllnl\") pod \"route-controller-manager-67dcb476b6-7hpxv\" (UID: \"76073367-d16a-4ce8-8d76-f6dcde4c0c3d\") " pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.037916 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.281331 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784464b967-jggxj"] Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.412716 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:28:30 crc kubenswrapper[4796]: E1212 05:28:30.413101 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:28:30 crc kubenswrapper[4796]: W1212 05:28:30.564794 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76073367_d16a_4ce8_8d76_f6dcde4c0c3d.slice/crio-696c234eb3d7625f88f469d04440d31059cdba2efa0c6d66c0935680cdb7b888 WatchSource:0}: Error finding container 696c234eb3d7625f88f469d04440d31059cdba2efa0c6d66c0935680cdb7b888: Status 404 returned error can't find the container with id 696c234eb3d7625f88f469d04440d31059cdba2efa0c6d66c0935680cdb7b888 Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.569171 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv"] Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.905018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" event={"ID":"ee74affd-a878-46ee-b08c-2fcdc8d1844c","Type":"ContainerStarted","Data":"427daf345e626b9080d50a1a656fbabdb3ccffa0266f2b003cfa15d85db4f598"} Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.905391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" event={"ID":"ee74affd-a878-46ee-b08c-2fcdc8d1844c","Type":"ContainerStarted","Data":"fbedbba3d80cf9733d62c366941ef51c9d7bf4df4f63b26639095e7d7e5e2e83"} Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.905745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.907718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" event={"ID":"76073367-d16a-4ce8-8d76-f6dcde4c0c3d","Type":"ContainerStarted","Data":"843e8b8720ac1e12a78751d2dd5bad8b0cd40193acd170ff59fae7d8fb76a8e6"} Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.907752 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" event={"ID":"76073367-d16a-4ce8-8d76-f6dcde4c0c3d","Type":"ContainerStarted","Data":"696c234eb3d7625f88f469d04440d31059cdba2efa0c6d66c0935680cdb7b888"} Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.908649 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:30 crc 
kubenswrapper[4796]: I1212 05:28:30.910358 4796 patch_prober.go:28] interesting pod/route-controller-manager-67dcb476b6-7hpxv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.245:8443/healthz\": dial tcp 10.217.0.245:8443: connect: connection refused" start-of-body= Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.910402 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" podUID="76073367-d16a-4ce8-8d76-f6dcde4c0c3d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.245:8443/healthz\": dial tcp 10.217.0.245:8443: connect: connection refused" Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.911083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" Dec 12 05:28:30 crc kubenswrapper[4796]: I1212 05:28:30.937184 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784464b967-jggxj" podStartSLOduration=1.9371622990000001 podStartE2EDuration="1.937162299s" podCreationTimestamp="2025-12-12 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:28:30.935406284 +0000 UTC m=+3301.811423441" watchObservedRunningTime="2025-12-12 05:28:30.937162299 +0000 UTC m=+3301.813179456" Dec 12 05:28:31 crc kubenswrapper[4796]: I1212 05:28:31.021505 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" podStartSLOduration=4.021488571 podStartE2EDuration="4.021488571s" podCreationTimestamp="2025-12-12 05:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:28:30.985880232 +0000 UTC m=+3301.861897379" watchObservedRunningTime="2025-12-12 05:28:31.021488571 +0000 UTC m=+3301.897505718" Dec 12 05:28:31 crc kubenswrapper[4796]: I1212 05:28:31.920157 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67dcb476b6-7hpxv" Dec 12 05:28:45 crc kubenswrapper[4796]: I1212 05:28:45.411386 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:28:45 crc kubenswrapper[4796]: E1212 05:28:45.412077 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:28:57 crc kubenswrapper[4796]: I1212 05:28:57.411806 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:28:57 crc kubenswrapper[4796]: E1212 05:28:57.412821 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:29:10 crc kubenswrapper[4796]: I1212 05:29:10.411953 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:29:10 crc kubenswrapper[4796]: E1212 05:29:10.412732 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:29:24 crc kubenswrapper[4796]: I1212 05:29:24.411726 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:29:24 crc kubenswrapper[4796]: E1212 05:29:24.412315 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:29:36 crc kubenswrapper[4796]: I1212 05:29:36.411622 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:29:36 crc kubenswrapper[4796]: E1212 05:29:36.412471 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:29:50 crc kubenswrapper[4796]: I1212 05:29:50.411824 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:29:50 crc kubenswrapper[4796]: E1212 05:29:50.412786 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.159622 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh"] Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.161457 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.163556 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.163795 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.171559 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh"] Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.263216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.263302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xqz\" (UniqueName: \"kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.263393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.365545 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.365618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xqz\" (UniqueName: \"kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.365674 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.366838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume\") pod 
\"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.383102 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.384569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xqz\" (UniqueName: \"kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz\") pod \"collect-profiles-29425290-h4hdh\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.528041 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:00 crc kubenswrapper[4796]: I1212 05:30:00.995139 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh"] Dec 12 05:30:01 crc kubenswrapper[4796]: I1212 05:30:01.768916 4796 generic.go:334] "Generic (PLEG): container finished" podID="4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" containerID="a4b07c5c8afbadd9caa7806f65fde4583061ee6a4865a61ec078160587734bb3" exitCode=0 Dec 12 05:30:01 crc kubenswrapper[4796]: I1212 05:30:01.769022 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" event={"ID":"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555","Type":"ContainerDied","Data":"a4b07c5c8afbadd9caa7806f65fde4583061ee6a4865a61ec078160587734bb3"} Dec 12 05:30:01 crc kubenswrapper[4796]: I1212 05:30:01.769568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" event={"ID":"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555","Type":"ContainerStarted","Data":"2d811cad557cce9cea8e5a205819219c240fd83c2883f69d3e18563a58feacbb"} Dec 12 05:30:02 crc kubenswrapper[4796]: I1212 05:30:02.411392 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:30:02 crc kubenswrapper[4796]: E1212 05:30:02.411730 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.284593 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.382894 4796 scope.go:117] "RemoveContainer" containerID="8315a50d826869b9bf4f3c6b286a239420dcdf10b1c88dec7b7c01347ba18c5c" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.404860 4796 scope.go:117] "RemoveContainer" containerID="b20cd1caa7844daef65c14c70633934554ae90055c154a97799a1a7d29ab2b18" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.435864 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume\") pod \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.436096 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8xqz\" (UniqueName: \"kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz\") pod \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.436148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume\") pod \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\" (UID: \"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555\") " Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.437104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume" (OuterVolumeSpecName: "config-volume") pod "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" (UID: "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.439662 4796 scope.go:117] "RemoveContainer" containerID="9631b412af8ae632a081d8ea4db615777e1f9ca2385ab87f19ed97a950eb9dbf" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.442704 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" (UID: "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.443811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz" (OuterVolumeSpecName: "kube-api-access-b8xqz") pod "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" (UID: "4682f9a8-e3b7-4401-a3d6-ccd58c3a0555"). InnerVolumeSpecName "kube-api-access-b8xqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.538648 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8xqz\" (UniqueName: \"kubernetes.io/projected/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-kube-api-access-b8xqz\") on node \"crc\" DevicePath \"\"" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.539185 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.539300 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682f9a8-e3b7-4401-a3d6-ccd58c3a0555-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.797648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" event={"ID":"4682f9a8-e3b7-4401-a3d6-ccd58c3a0555","Type":"ContainerDied","Data":"2d811cad557cce9cea8e5a205819219c240fd83c2883f69d3e18563a58feacbb"} Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.797695 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d811cad557cce9cea8e5a205819219c240fd83c2883f69d3e18563a58feacbb" Dec 12 05:30:03 crc kubenswrapper[4796]: I1212 05:30:03.797713 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425290-h4hdh" Dec 12 05:30:04 crc kubenswrapper[4796]: I1212 05:30:04.366948 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf"] Dec 12 05:30:04 crc kubenswrapper[4796]: I1212 05:30:04.374939 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425245-gkhxf"] Dec 12 05:30:05 crc kubenswrapper[4796]: I1212 05:30:05.429687 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffeb1a8-57e6-4f77-a770-a48e7f99910f" path="/var/lib/kubelet/pods/0ffeb1a8-57e6-4f77-a770-a48e7f99910f/volumes" Dec 12 05:30:13 crc kubenswrapper[4796]: I1212 05:30:13.466540 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:30:13 crc kubenswrapper[4796]: E1212 05:30:13.467329 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:30:26 crc kubenswrapper[4796]: I1212 05:30:26.411119 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:30:26 crc kubenswrapper[4796]: E1212 05:30:26.411873 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:30:40 crc kubenswrapper[4796]: I1212 05:30:40.411106 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:30:41 crc kubenswrapper[4796]: I1212 05:30:41.115592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa"} Dec 12 05:30:42 crc kubenswrapper[4796]: I1212 05:30:42.919514 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 05:31:03 crc kubenswrapper[4796]: I1212 05:31:03.545018 4796 scope.go:117] "RemoveContainer" containerID="01893e39c3f3a0ff3aea579b28212e83be4cd6d80df7503186350345b176bff2" Dec 12 05:32:09 crc kubenswrapper[4796]: I1212 05:32:09.969252 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:09 crc kubenswrapper[4796]: E1212 05:32:09.970152 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" containerName="collect-profiles" Dec 12 05:32:09 crc kubenswrapper[4796]: I1212 05:32:09.970165 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" containerName="collect-profiles" Dec 12 05:32:09 crc kubenswrapper[4796]: I1212 05:32:09.970357 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682f9a8-e3b7-4401-a3d6-ccd58c3a0555" containerName="collect-profiles" Dec 12 05:32:09 crc kubenswrapper[4796]: I1212 05:32:09.971660 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:09 crc kubenswrapper[4796]: I1212 05:32:09.982902 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.103303 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.103381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqzp\" (UniqueName: \"kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.103508 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.205707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.205796 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljqzp\" (UniqueName: \"kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.206564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.206713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.207030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.230347 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ljqzp\" (UniqueName: \"kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp\") pod \"certified-operators-6w2f9\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.303733 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.885980 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:10 crc kubenswrapper[4796]: I1212 05:32:10.917082 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerStarted","Data":"4f603a7c09e7bbb9fa945e2edc4fcc269ebfe62e722a912b15722cdc45d029f7"} Dec 12 05:32:11 crc kubenswrapper[4796]: I1212 05:32:11.927752 4796 generic.go:334] "Generic (PLEG): container finished" podID="b483e241-0c95-491a-98f6-c04bf6829bde" containerID="ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7" exitCode=0 Dec 12 05:32:11 crc kubenswrapper[4796]: I1212 05:32:11.927868 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerDied","Data":"ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7"} Dec 12 05:32:11 crc kubenswrapper[4796]: I1212 05:32:11.930933 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:32:12 crc kubenswrapper[4796]: I1212 05:32:12.938353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerStarted","Data":"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8"} Dec 12 05:32:14 crc kubenswrapper[4796]: I1212 05:32:14.957555 4796 generic.go:334] "Generic (PLEG): container finished" podID="b483e241-0c95-491a-98f6-c04bf6829bde" containerID="75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8" exitCode=0 Dec 12 05:32:14 crc kubenswrapper[4796]: I1212 05:32:14.957598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerDied","Data":"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8"} Dec 12 05:32:15 crc kubenswrapper[4796]: I1212 05:32:15.968408 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerStarted","Data":"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86"} Dec 12 05:32:16 crc kubenswrapper[4796]: I1212 05:32:16.002806 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6w2f9" podStartSLOduration=3.436143648 podStartE2EDuration="7.002784661s" podCreationTimestamp="2025-12-12 05:32:09 +0000 UTC" firstStartedPulling="2025-12-12 05:32:11.930716542 +0000 UTC m=+3522.806733689" lastFinishedPulling="2025-12-12 05:32:15.497357565 +0000 UTC m=+3526.373374702" observedRunningTime="2025-12-12 05:32:15.999511689 +0000 UTC m=+3526.875528846" watchObservedRunningTime="2025-12-12 
05:32:16.002784661 +0000 UTC m=+3526.878801808" Dec 12 05:32:20 crc kubenswrapper[4796]: I1212 05:32:20.304989 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:20 crc kubenswrapper[4796]: I1212 05:32:20.305740 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:20 crc kubenswrapper[4796]: I1212 05:32:20.356110 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:21 crc kubenswrapper[4796]: I1212 05:32:21.083096 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:21 crc kubenswrapper[4796]: I1212 05:32:21.135336 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.051095 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6w2f9" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="registry-server" containerID="cri-o://f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86" gracePeriod=2 Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.704507 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.882912 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities\") pod \"b483e241-0c95-491a-98f6-c04bf6829bde\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.883015 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content\") pod \"b483e241-0c95-491a-98f6-c04bf6829bde\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.884017 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities" (OuterVolumeSpecName: "utilities") pod "b483e241-0c95-491a-98f6-c04bf6829bde" (UID: "b483e241-0c95-491a-98f6-c04bf6829bde"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.888346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljqzp\" (UniqueName: \"kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp\") pod \"b483e241-0c95-491a-98f6-c04bf6829bde\" (UID: \"b483e241-0c95-491a-98f6-c04bf6829bde\") " Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.889184 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.895601 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp" (OuterVolumeSpecName: "kube-api-access-ljqzp") pod "b483e241-0c95-491a-98f6-c04bf6829bde" (UID: "b483e241-0c95-491a-98f6-c04bf6829bde"). InnerVolumeSpecName "kube-api-access-ljqzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.950844 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b483e241-0c95-491a-98f6-c04bf6829bde" (UID: "b483e241-0c95-491a-98f6-c04bf6829bde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.991126 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483e241-0c95-491a-98f6-c04bf6829bde-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:32:23 crc kubenswrapper[4796]: I1212 05:32:23.991157 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljqzp\" (UniqueName: \"kubernetes.io/projected/b483e241-0c95-491a-98f6-c04bf6829bde-kube-api-access-ljqzp\") on node \"crc\" DevicePath \"\"" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.061845 4796 generic.go:334] "Generic (PLEG): container finished" podID="b483e241-0c95-491a-98f6-c04bf6829bde" containerID="f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86" exitCode=0 Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.061896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerDied","Data":"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86"} Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.061919 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6w2f9" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.061934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6w2f9" event={"ID":"b483e241-0c95-491a-98f6-c04bf6829bde","Type":"ContainerDied","Data":"4f603a7c09e7bbb9fa945e2edc4fcc269ebfe62e722a912b15722cdc45d029f7"} Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.062022 4796 scope.go:117] "RemoveContainer" containerID="f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.096645 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.102978 4796 scope.go:117] "RemoveContainer" containerID="75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.106967 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6w2f9"] Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.132108 4796 scope.go:117] "RemoveContainer" containerID="ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.175080 4796 scope.go:117] "RemoveContainer" containerID="f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86" Dec 12 05:32:24 crc kubenswrapper[4796]: E1212 05:32:24.175668 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86\": container with ID starting with f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86 not found: ID does not exist" containerID="f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.175711 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86"} err="failed to get container status \"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86\": rpc error: code = NotFound desc = could not find container \"f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86\": container with ID starting with f5553baf0ebb92b6097d746599b3fa21d34c4f7eff9668f18b74f22c32575b86 not found: ID does not exist" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.175738 4796 scope.go:117] "RemoveContainer" containerID="75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8" Dec 12 05:32:24 crc kubenswrapper[4796]: E1212 05:32:24.176136 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8\": container with ID starting with 75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8 not found: ID does not exist" containerID="75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.176170 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8"} err="failed to get container status \"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8\": rpc error: code = NotFound desc = could not find 
container \"75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8\": container with ID starting with 75374d8bea564a51cae9d95bc0d6206ca720800ced9ca946075b61ca5d9be1c8 not found: ID does not exist" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.176197 4796 scope.go:117] "RemoveContainer" containerID="ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7" Dec 12 05:32:24 crc kubenswrapper[4796]: E1212 05:32:24.177872 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7\": container with ID starting with ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7 not found: ID does not exist" containerID="ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7" Dec 12 05:32:24 crc kubenswrapper[4796]: I1212 05:32:24.177928 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7"} err="failed to get container status \"ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7\": rpc error: code = NotFound desc = could not find container \"ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7\": container with ID starting with ca6900337d0b4db9ff5c6297cff8ceed71f0f8196071d33445bf0bb0dd5237a7 not found: ID does not exist" Dec 12 05:32:25 crc kubenswrapper[4796]: I1212 05:32:25.421945 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" path="/var/lib/kubelet/pods/b483e241-0c95-491a-98f6-c04bf6829bde/volumes" Dec 12 05:33:02 crc kubenswrapper[4796]: I1212 05:33:02.970245 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:33:02 crc kubenswrapper[4796]: I1212 05:33:02.970829 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:33:32 crc kubenswrapper[4796]: I1212 05:33:32.969902 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:33:32 crc kubenswrapper[4796]: I1212 05:33:32.971585 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:34:02 crc kubenswrapper[4796]: I1212 05:34:02.969524 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 
05:34:02 crc kubenswrapper[4796]: I1212 05:34:02.970076 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:34:02 crc kubenswrapper[4796]: I1212 05:34:02.970158 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:34:02 crc kubenswrapper[4796]: I1212 05:34:02.971114 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:34:02 crc kubenswrapper[4796]: I1212 05:34:02.971177 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa" gracePeriod=600 Dec 12 05:34:03 crc kubenswrapper[4796]: I1212 05:34:03.965479 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa" exitCode=0 Dec 12 05:34:03 crc kubenswrapper[4796]: I1212 05:34:03.965934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa"} Dec 12 05:34:03 crc kubenswrapper[4796]: I1212 05:34:03.965959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018"} Dec 12 05:34:03 crc kubenswrapper[4796]: I1212 05:34:03.965976 4796 scope.go:117] "RemoveContainer" containerID="450e2cd18d319e787a2546e0a7e48d29007c8f05a288627bad097e5b5634a1f4" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.027637 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:03 crc kubenswrapper[4796]: E1212 05:35:03.028627 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="extract-content" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.028644 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="extract-content" Dec 12 05:35:03 crc kubenswrapper[4796]: E1212 05:35:03.028672 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="extract-utilities" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.028681 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="extract-utilities" Dec 12 05:35:03 crc kubenswrapper[4796]: E1212 
05:35:03.028711 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="registry-server" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.028719 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="registry-server" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.028932 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b483e241-0c95-491a-98f6-c04bf6829bde" containerName="registry-server" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.030612 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.041864 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.080881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.080950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.081009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npwqn\" (UniqueName: \"kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.182544 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npwqn\" (UniqueName: \"kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.182736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.182780 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.183320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.183458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.210515 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npwqn\" (UniqueName: \"kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn\") pod \"redhat-marketplace-bghs6\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.355037 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:03 crc kubenswrapper[4796]: I1212 05:35:03.904368 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:04 crc kubenswrapper[4796]: I1212 05:35:04.549056 4796 generic.go:334] "Generic (PLEG): container finished" podID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerID="3894434eb614799d4a131834301ee70ede7f8416c649520f17ff1377750038db" exitCode=0 Dec 12 05:35:04 crc kubenswrapper[4796]: I1212 05:35:04.549121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerDied","Data":"3894434eb614799d4a131834301ee70ede7f8416c649520f17ff1377750038db"} Dec 12 05:35:04 crc kubenswrapper[4796]: I1212 05:35:04.549167 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerStarted","Data":"0ba1e8cd52985e212df4fae31f7518adedbbcf0160d83df2748b2020ff7eb2c7"} Dec 12 05:35:05 crc kubenswrapper[4796]: I1212 05:35:05.562020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerStarted","Data":"239e925d16a59e4141a8145f9c19f2b26829947683a8864556635c2ac5b3a530"} Dec 12 05:35:06 crc kubenswrapper[4796]: I1212 05:35:06.571935 4796 generic.go:334] "Generic (PLEG): container finished" podID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerID="239e925d16a59e4141a8145f9c19f2b26829947683a8864556635c2ac5b3a530" exitCode=0 Dec 12 05:35:06 crc kubenswrapper[4796]: I1212 05:35:06.572129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerDied","Data":"239e925d16a59e4141a8145f9c19f2b26829947683a8864556635c2ac5b3a530"} Dec 12 05:35:07 crc kubenswrapper[4796]: I1212 05:35:07.592546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerStarted","Data":"38f9a2e783d22a7bd52c2740410e3efd27570698061b156f47aa1ede8a03ad61"} Dec 12 05:35:07 crc kubenswrapper[4796]: I1212 05:35:07.615409 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bghs6" podStartSLOduration=2.001093108 podStartE2EDuration="4.615391626s" podCreationTimestamp="2025-12-12 05:35:03 +0000 UTC" firstStartedPulling="2025-12-12 05:35:04.551512947 +0000 UTC m=+3695.427530094" lastFinishedPulling="2025-12-12 05:35:07.165811465 +0000 UTC m=+3698.041828612" observedRunningTime="2025-12-12 05:35:07.606989512 +0000 UTC m=+3698.483006669" watchObservedRunningTime="2025-12-12 05:35:07.615391626 +0000 UTC m=+3698.491408773" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.416222 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.419082 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.427294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.531404 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.531613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.531760 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmqb5\" (UniqueName: \"kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.633900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.634039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmqb5\" (UniqueName: \"kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.634112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 
05:35:10.634463 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.634534 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.699919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmqb5\" (UniqueName: \"kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5\") pod \"redhat-operators-4ndxq\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:10 crc kubenswrapper[4796]: I1212 05:35:10.749988 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:11 crc kubenswrapper[4796]: I1212 05:35:11.310874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:11 crc kubenswrapper[4796]: I1212 05:35:11.625170 4796 generic.go:334] "Generic (PLEG): container finished" podID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerID="951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400" exitCode=0 Dec 12 05:35:11 crc kubenswrapper[4796]: I1212 05:35:11.625861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerDied","Data":"951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400"} Dec 12 05:35:11 crc kubenswrapper[4796]: I1212 05:35:11.625914 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerStarted","Data":"e555a0a007ac3681b38d82a6f824434f0af2761c1a39493fff975b132e3a9062"} Dec 12 05:35:12 crc kubenswrapper[4796]: I1212 05:35:12.698921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerStarted","Data":"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08"} Dec 12 05:35:13 crc kubenswrapper[4796]: I1212 05:35:13.355174 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:13 crc kubenswrapper[4796]: I1212 05:35:13.356464 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:13 crc kubenswrapper[4796]: I1212 05:35:13.422961 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:13 crc kubenswrapper[4796]: I1212 05:35:13.751238 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:15 crc kubenswrapper[4796]: I1212 05:35:15.594448 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:15 crc kubenswrapper[4796]: I1212 05:35:15.964764 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bghs6" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="registry-server" containerID="cri-o://38f9a2e783d22a7bd52c2740410e3efd27570698061b156f47aa1ede8a03ad61" gracePeriod=2 Dec 12 05:35:16 crc kubenswrapper[4796]: I1212 05:35:16.978816 4796 generic.go:334] "Generic (PLEG): container finished" podID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerID="f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08" exitCode=0 Dec 12 05:35:16 crc kubenswrapper[4796]: I1212 05:35:16.978860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerDied","Data":"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08"} Dec 12 05:35:17 crc kubenswrapper[4796]: I1212 05:35:17.996360 4796 generic.go:334] "Generic (PLEG): container finished" podID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerID="38f9a2e783d22a7bd52c2740410e3efd27570698061b156f47aa1ede8a03ad61" exitCode=0 Dec 12 05:35:17 crc kubenswrapper[4796]: I1212 05:35:17.996439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerDied","Data":"38f9a2e783d22a7bd52c2740410e3efd27570698061b156f47aa1ede8a03ad61"} Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.398612 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.504801 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npwqn\" (UniqueName: \"kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn\") pod \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.505342 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities\") pod \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.505385 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content\") pod \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\" (UID: \"a845dfb6-8e0f-4f26-a276-3ae4adee4b19\") " Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.505882 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities" (OuterVolumeSpecName: "utilities") pod "a845dfb6-8e0f-4f26-a276-3ae4adee4b19" (UID: "a845dfb6-8e0f-4f26-a276-3ae4adee4b19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.506443 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.511008 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn" (OuterVolumeSpecName: "kube-api-access-npwqn") pod "a845dfb6-8e0f-4f26-a276-3ae4adee4b19" (UID: "a845dfb6-8e0f-4f26-a276-3ae4adee4b19"). InnerVolumeSpecName "kube-api-access-npwqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.524321 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a845dfb6-8e0f-4f26-a276-3ae4adee4b19" (UID: "a845dfb6-8e0f-4f26-a276-3ae4adee4b19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.608152 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npwqn\" (UniqueName: \"kubernetes.io/projected/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-kube-api-access-npwqn\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:18 crc kubenswrapper[4796]: I1212 05:35:18.608188 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a845dfb6-8e0f-4f26-a276-3ae4adee4b19-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.014904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerStarted","Data":"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001"} Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.020966 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghs6" event={"ID":"a845dfb6-8e0f-4f26-a276-3ae4adee4b19","Type":"ContainerDied","Data":"0ba1e8cd52985e212df4fae31f7518adedbbcf0160d83df2748b2020ff7eb2c7"} Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.021040 4796 scope.go:117] "RemoveContainer" containerID="38f9a2e783d22a7bd52c2740410e3efd27570698061b156f47aa1ede8a03ad61" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.021253 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghs6" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.045051 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ndxq" podStartSLOduration=2.859996765 podStartE2EDuration="9.045024394s" podCreationTimestamp="2025-12-12 05:35:10 +0000 UTC" firstStartedPulling="2025-12-12 05:35:11.626954462 +0000 UTC m=+3702.502971609" lastFinishedPulling="2025-12-12 05:35:17.811982091 +0000 UTC m=+3708.687999238" observedRunningTime="2025-12-12 05:35:19.035018319 +0000 UTC m=+3709.911035486" watchObservedRunningTime="2025-12-12 05:35:19.045024394 +0000 UTC m=+3709.921041541" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.073400 4796 scope.go:117] "RemoveContainer" containerID="239e925d16a59e4141a8145f9c19f2b26829947683a8864556635c2ac5b3a530" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.074190 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.091803 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghs6"] Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.123563 4796 scope.go:117] "RemoveContainer" containerID="3894434eb614799d4a131834301ee70ede7f8416c649520f17ff1377750038db" Dec 12 05:35:19 crc kubenswrapper[4796]: I1212 05:35:19.426697 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" path="/var/lib/kubelet/pods/a845dfb6-8e0f-4f26-a276-3ae4adee4b19/volumes" Dec 12 05:35:20 crc kubenswrapper[4796]: I1212 05:35:20.750859 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:20 crc kubenswrapper[4796]: I1212 05:35:20.751219 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:21 crc kubenswrapper[4796]: I1212 05:35:21.808248 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ndxq" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="registry-server" probeResult="failure" output=< Dec 12 05:35:21 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 05:35:21 crc kubenswrapper[4796]: > Dec 12 05:35:30 crc kubenswrapper[4796]: I1212 05:35:30.846836 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:30 crc kubenswrapper[4796]: I1212 05:35:30.898478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:31 crc kubenswrapper[4796]: I1212 05:35:31.084305 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.135444 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ndxq" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="registry-server" containerID="cri-o://6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001" gracePeriod=2 Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.812501 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.894463 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmqb5\" (UniqueName: \"kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5\") pod \"737eb7c9-2db7-4fbc-804c-099903977ef5\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.897178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities\") pod \"737eb7c9-2db7-4fbc-804c-099903977ef5\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.897224 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content\") pod \"737eb7c9-2db7-4fbc-804c-099903977ef5\" (UID: \"737eb7c9-2db7-4fbc-804c-099903977ef5\") " Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.897741 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities" (OuterVolumeSpecName: "utilities") pod "737eb7c9-2db7-4fbc-804c-099903977ef5" (UID: "737eb7c9-2db7-4fbc-804c-099903977ef5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.898105 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.905582 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5" (OuterVolumeSpecName: "kube-api-access-vmqb5") pod "737eb7c9-2db7-4fbc-804c-099903977ef5" (UID: "737eb7c9-2db7-4fbc-804c-099903977ef5"). InnerVolumeSpecName "kube-api-access-vmqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:35:32 crc kubenswrapper[4796]: I1212 05:35:32.999846 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmqb5\" (UniqueName: \"kubernetes.io/projected/737eb7c9-2db7-4fbc-804c-099903977ef5-kube-api-access-vmqb5\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.027800 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "737eb7c9-2db7-4fbc-804c-099903977ef5" (UID: "737eb7c9-2db7-4fbc-804c-099903977ef5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.101391 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737eb7c9-2db7-4fbc-804c-099903977ef5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.146536 4796 generic.go:334] "Generic (PLEG): container finished" podID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerID="6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001" exitCode=0 Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.146603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerDied","Data":"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001"} Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.147413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ndxq" event={"ID":"737eb7c9-2db7-4fbc-804c-099903977ef5","Type":"ContainerDied","Data":"e555a0a007ac3681b38d82a6f824434f0af2761c1a39493fff975b132e3a9062"} Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.146654 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ndxq" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.147465 4796 scope.go:117] "RemoveContainer" containerID="6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.175292 4796 scope.go:117] "RemoveContainer" containerID="f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.180219 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.188668 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ndxq"] Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.209266 4796 scope.go:117] "RemoveContainer" containerID="951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.237520 4796 scope.go:117] "RemoveContainer" containerID="6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001" Dec 12 05:35:33 crc kubenswrapper[4796]: E1212 05:35:33.238619 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001\": container with ID starting with 6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001 not found: ID does not exist" containerID="6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.238660 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001"} err="failed to get container status \"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001\": rpc error: code = NotFound desc = could not find container \"6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001\": container with ID starting with 6d573c01813e3dc12eecb48211fcd3836776b27262c48dc78e444d048988f001 not found: ID does not exist" Dec 12 05:35:33 crc 
kubenswrapper[4796]: I1212 05:35:33.238686 4796 scope.go:117] "RemoveContainer" containerID="f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08" Dec 12 05:35:33 crc kubenswrapper[4796]: E1212 05:35:33.239122 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08\": container with ID starting with f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08 not found: ID does not exist" containerID="f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.239151 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08"} err="failed to get container status \"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08\": rpc error: code = NotFound desc = could not find container \"f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08\": container with ID starting with f12fbb8d6b9f3c39aba047d9059652e2150bf4ec9e7e29e14afe3c87dc064b08 not found: ID does not exist" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.239170 4796 scope.go:117] "RemoveContainer" containerID="951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400" Dec 12 05:35:33 crc kubenswrapper[4796]: E1212 05:35:33.239722 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400\": container with ID starting with 951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400 not found: ID does not exist" containerID="951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.239763 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400"} err="failed to get container status \"951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400\": rpc error: code = NotFound desc = could not find container \"951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400\": container with ID starting with 951a4e3f0160693114a4d90be87dd2886784b9efa26a08976a600b733deed400 not found: ID does not exist" Dec 12 05:35:33 crc kubenswrapper[4796]: I1212 05:35:33.423111 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" path="/var/lib/kubelet/pods/737eb7c9-2db7-4fbc-804c-099903977ef5/volumes" Dec 12 05:36:32 crc kubenswrapper[4796]: I1212 05:36:32.969541 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:36:32 crc kubenswrapper[4796]: I1212 05:36:32.970180 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:37:02 crc kubenswrapper[4796]: I1212 05:37:02.970333 4796 patch_prober.go:28] interesting 
pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:37:02 crc kubenswrapper[4796]: I1212 05:37:02.971784 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:37:32 crc kubenswrapper[4796]: I1212 05:37:32.970054 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:37:32 crc kubenswrapper[4796]: I1212 05:37:32.970539 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:37:32 crc kubenswrapper[4796]: I1212 05:37:32.970583 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:37:32 crc kubenswrapper[4796]: I1212 05:37:32.971243 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:37:32 crc kubenswrapper[4796]: I1212 05:37:32.971301 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" gracePeriod=600 Dec 12 05:37:33 crc kubenswrapper[4796]: I1212 05:37:33.369517 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" exitCode=0 Dec 12 05:37:33 crc kubenswrapper[4796]: I1212 05:37:33.369858 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018"} Dec 12 05:37:33 crc kubenswrapper[4796]: I1212 05:37:33.369976 4796 scope.go:117] "RemoveContainer" containerID="29b3c9440712942defcce77886caa07d4c33b8cae6f4614f02bc0a8de7d5d1aa" Dec 12 05:37:33 crc kubenswrapper[4796]: E1212 05:37:33.421605 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:37:34 crc kubenswrapper[4796]: I1212 05:37:34.381218 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:37:34 crc kubenswrapper[4796]: E1212 05:37:34.381840 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:37:49 crc kubenswrapper[4796]: I1212 05:37:49.419271 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:37:49 crc kubenswrapper[4796]: E1212 05:37:49.420224 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:38:02 crc kubenswrapper[4796]: I1212 05:38:02.411581 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:38:02 crc kubenswrapper[4796]: E1212 05:38:02.412162 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.674880 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.675969 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="extract-utilities" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.675993 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="extract-utilities" Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.676009 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676017 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.676034 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="extract-content" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676041 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="extract-content" Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.676054 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="extract-utilities" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676061 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="extract-utilities" Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.676095 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676104 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: E1212 05:38:13.676119 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="extract-content" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676126 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="extract-content" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676381 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a845dfb6-8e0f-4f26-a276-3ae4adee4b19" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.676399 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="737eb7c9-2db7-4fbc-804c-099903977ef5" containerName="registry-server" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.678072 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.710911 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.820967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmj4\" (UniqueName: \"kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.821221 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.821369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.923131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.923339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngmj4\" (UniqueName: \"kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.923386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.923664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.923844 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:13 crc kubenswrapper[4796]: I1212 05:38:13.949324 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngmj4\" (UniqueName: \"kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4\") pod \"community-operators-skvtj\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:14 crc kubenswrapper[4796]: I1212 05:38:14.010707 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:14 crc kubenswrapper[4796]: I1212 05:38:14.627027 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:14 crc kubenswrapper[4796]: I1212 05:38:14.747658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerStarted","Data":"ccc5ba55bb7188c769bfbf1eb2ba13539117031a555c55ce302631918ac2af79"} Dec 12 05:38:15 crc kubenswrapper[4796]: I1212 05:38:15.413532 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:38:15 crc kubenswrapper[4796]: E1212 05:38:15.414080 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:38:15 crc kubenswrapper[4796]: I1212 05:38:15.758336 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerID="fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c" exitCode=0 Dec 12 05:38:15 crc kubenswrapper[4796]: I1212 05:38:15.758381 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerDied","Data":"fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c"} Dec 12 05:38:15 crc kubenswrapper[4796]: I1212 05:38:15.761353 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:38:17 crc kubenswrapper[4796]: I1212 05:38:17.778222 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerStarted","Data":"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67"} Dec 12 05:38:18 crc kubenswrapper[4796]: I1212 05:38:18.793966 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerID="6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67" exitCode=0 Dec 12 05:38:18 crc kubenswrapper[4796]: I1212 05:38:18.794046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerDied","Data":"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67"} Dec 12 05:38:20 crc kubenswrapper[4796]: I1212 05:38:20.813318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" 
event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerStarted","Data":"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938"} Dec 12 05:38:20 crc kubenswrapper[4796]: I1212 05:38:20.835141 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-skvtj" podStartSLOduration=3.741666231 podStartE2EDuration="7.835123856s" podCreationTimestamp="2025-12-12 05:38:13 +0000 UTC" firstStartedPulling="2025-12-12 05:38:15.761122424 +0000 UTC m=+3886.637139571" lastFinishedPulling="2025-12-12 05:38:19.854580049 +0000 UTC m=+3890.730597196" observedRunningTime="2025-12-12 05:38:20.832407421 +0000 UTC m=+3891.708424578" watchObservedRunningTime="2025-12-12 05:38:20.835123856 +0000 UTC m=+3891.711141003" Dec 12 05:38:24 crc kubenswrapper[4796]: I1212 05:38:24.011942 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:24 crc kubenswrapper[4796]: I1212 05:38:24.013027 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:24 crc kubenswrapper[4796]: I1212 05:38:24.131055 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:30 crc kubenswrapper[4796]: I1212 05:38:30.411347 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:38:30 crc kubenswrapper[4796]: E1212 05:38:30.412081 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:38:34 crc kubenswrapper[4796]: I1212 05:38:34.126236 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:34 crc kubenswrapper[4796]: I1212 05:38:34.184644 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:34 crc kubenswrapper[4796]: I1212 05:38:34.932507 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-skvtj" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="registry-server" containerID="cri-o://f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938" gracePeriod=2 Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.562753 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.707182 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content\") pod \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.707523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities\") pod \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.707640 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngmj4\" (UniqueName: \"kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4\") pod \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\" (UID: \"7c4367e7-4dbd-4831-9b7b-edf3f91b1660\") " Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.709880 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities" (OuterVolumeSpecName: "utilities") pod "7c4367e7-4dbd-4831-9b7b-edf3f91b1660" (UID: "7c4367e7-4dbd-4831-9b7b-edf3f91b1660"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.717401 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4" (OuterVolumeSpecName: "kube-api-access-ngmj4") pod "7c4367e7-4dbd-4831-9b7b-edf3f91b1660" (UID: "7c4367e7-4dbd-4831-9b7b-edf3f91b1660"). InnerVolumeSpecName "kube-api-access-ngmj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.780921 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c4367e7-4dbd-4831-9b7b-edf3f91b1660" (UID: "7c4367e7-4dbd-4831-9b7b-edf3f91b1660"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.809478 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngmj4\" (UniqueName: \"kubernetes.io/projected/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-kube-api-access-ngmj4\") on node \"crc\" DevicePath \"\"" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.809693 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.809757 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c4367e7-4dbd-4831-9b7b-edf3f91b1660-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.946371 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerID="f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938" exitCode=0 Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.946507 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skvtj" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.946516 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerDied","Data":"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938"} Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.946855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skvtj" event={"ID":"7c4367e7-4dbd-4831-9b7b-edf3f91b1660","Type":"ContainerDied","Data":"ccc5ba55bb7188c769bfbf1eb2ba13539117031a555c55ce302631918ac2af79"} Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.946890 4796 scope.go:117] "RemoveContainer" containerID="f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938" Dec 12 05:38:35 crc kubenswrapper[4796]: I1212 05:38:35.971549 4796 scope.go:117] "RemoveContainer" containerID="6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:35.999969 4796 scope.go:117] "RemoveContainer" containerID="fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.003491 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.011741 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-skvtj"] Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.044443 4796 scope.go:117] "RemoveContainer" containerID="f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938" Dec 12 05:38:36 crc kubenswrapper[4796]: E1212 05:38:36.044867 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938\": container with ID starting with f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938 not found: ID does not exist" containerID="f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.044906 
4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938"} err="failed to get container status \"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938\": rpc error: code = NotFound desc = could not find container \"f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938\": container with ID starting with f836f65899901307ecce2d31c40c4b9c3a7ac4e095a838bfea0a4f4622ae2938 not found: ID does not exist" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.044932 4796 scope.go:117] "RemoveContainer" containerID="6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67" Dec 12 05:38:36 crc kubenswrapper[4796]: E1212 05:38:36.045264 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67\": container with ID starting with 6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67 not found: ID does not exist" containerID="6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.045305 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67"} err="failed to get container status \"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67\": rpc error: code = NotFound desc = could not find container \"6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67\": container with ID starting with 6812fabf0ba2a28def5b484ea7e547a8a7d32b989e0d307cf64dcd3e3ac57b67 not found: ID does not exist" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.045326 4796 scope.go:117] "RemoveContainer" containerID="fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c" Dec 12 05:38:36 crc kubenswrapper[4796]: E1212 05:38:36.045630 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c\": container with ID starting with fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c not found: ID does not exist" containerID="fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c" Dec 12 05:38:36 crc kubenswrapper[4796]: I1212 05:38:36.045655 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c"} err="failed to get container status \"fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c\": rpc error: code = NotFound desc = could not find container \"fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c\": container with ID starting with fea16b40f49a414f0e99bf1ae28fa0acee90a915d8051bcb186b1db23e96fb8c not found: ID does not exist" Dec 12 05:38:37 crc kubenswrapper[4796]: I1212 05:38:37.423927 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" path="/var/lib/kubelet/pods/7c4367e7-4dbd-4831-9b7b-edf3f91b1660/volumes" Dec 12 05:38:45 crc kubenswrapper[4796]: I1212 05:38:45.411757 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:38:45 crc kubenswrapper[4796]: E1212 05:38:45.413602 4796 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:38:56 crc kubenswrapper[4796]: I1212 05:38:56.411678 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:38:56 crc kubenswrapper[4796]: E1212 05:38:56.412790 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:39:08 crc kubenswrapper[4796]: I1212 05:39:08.411238 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:39:08 crc kubenswrapper[4796]: E1212 05:39:08.411906 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:39:20 crc kubenswrapper[4796]: I1212 05:39:20.411464 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:39:20 crc kubenswrapper[4796]: E1212 05:39:20.412150 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:39:34 crc kubenswrapper[4796]: I1212 05:39:34.412015 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:39:34 crc kubenswrapper[4796]: E1212 05:39:34.412878 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:39:49 crc kubenswrapper[4796]: I1212 05:39:49.419543 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:39:49 crc kubenswrapper[4796]: E1212 05:39:49.420257 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:40:02 crc kubenswrapper[4796]: I1212 05:40:02.411485 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:40:02 crc kubenswrapper[4796]: E1212 05:40:02.412231 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:40:16 crc kubenswrapper[4796]: I1212 05:40:16.411944 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:40:16 crc kubenswrapper[4796]: E1212 05:40:16.412758 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:40:27 crc kubenswrapper[4796]: I1212 05:40:27.411600 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:40:27 crc kubenswrapper[4796]: E1212 05:40:27.412437 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:40:42 crc kubenswrapper[4796]: I1212 05:40:42.411726 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:40:42 crc kubenswrapper[4796]: E1212 05:40:42.412292 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:40:53 crc kubenswrapper[4796]: I1212 05:40:53.411054 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:40:53 crc kubenswrapper[4796]: E1212 05:40:53.411685 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:41:06 crc kubenswrapper[4796]: I1212 05:41:06.410968 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:41:06 crc kubenswrapper[4796]: E1212 05:41:06.411838 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:41:19 crc kubenswrapper[4796]: I1212 05:41:19.420788 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:41:19 crc kubenswrapper[4796]: E1212 05:41:19.441012 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:41:30 crc kubenswrapper[4796]: I1212 05:41:30.411665 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:41:30 crc kubenswrapper[4796]: E1212 05:41:30.412615 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:41:43 crc kubenswrapper[4796]: I1212 05:41:43.410872 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:41:43 crc kubenswrapper[4796]: E1212 05:41:43.413762 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:41:57 crc kubenswrapper[4796]: I1212 05:41:57.412614 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:41:57 crc kubenswrapper[4796]: E1212 05:41:57.413794 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:42:10 crc kubenswrapper[4796]: I1212 05:42:10.411445 4796 scope.go:117] "RemoveContainer" 
containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:42:10 crc kubenswrapper[4796]: E1212 05:42:10.412297 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.986798 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:22 crc kubenswrapper[4796]: E1212 05:42:22.987696 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="extract-utilities" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.987709 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="extract-utilities" Dec 12 05:42:22 crc kubenswrapper[4796]: E1212 05:42:22.987723 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="registry-server" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.987729 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="registry-server" Dec 12 05:42:22 crc kubenswrapper[4796]: E1212 05:42:22.987755 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="extract-content" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.987761 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="extract-content" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.987936 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4367e7-4dbd-4831-9b7b-edf3f91b1660" containerName="registry-server" Dec 12 05:42:22 crc kubenswrapper[4796]: I1212 05:42:22.989404 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.008323 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.150213 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.150658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn2t\" (UniqueName: \"kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.150870 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.251675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn2t\" (UniqueName: \"kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.251763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.251837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.252408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.252421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.278207 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vmn2t\" (UniqueName: \"kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t\") pod \"certified-operators-bpmjn\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.324583 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:23 crc kubenswrapper[4796]: I1212 05:42:23.935698 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:24 crc kubenswrapper[4796]: I1212 05:42:24.083995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerStarted","Data":"de5643162ce4617b176a8e7e4a7f0083615940d146d1851ca57afcba2ca63f93"} Dec 12 05:42:25 crc kubenswrapper[4796]: I1212 05:42:25.095607 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerID="96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518" exitCode=0 Dec 12 05:42:25 crc kubenswrapper[4796]: I1212 05:42:25.095705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerDied","Data":"96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518"} Dec 12 05:42:25 crc kubenswrapper[4796]: I1212 05:42:25.411687 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:42:25 crc kubenswrapper[4796]: E1212 05:42:25.411911 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:42:26 crc kubenswrapper[4796]: I1212 05:42:26.108404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerStarted","Data":"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f"} Dec 12 05:42:27 crc kubenswrapper[4796]: I1212 05:42:27.118056 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerID="b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f" exitCode=0 Dec 12 05:42:27 crc kubenswrapper[4796]: I1212 05:42:27.118105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerDied","Data":"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f"} Dec 12 05:42:28 crc kubenswrapper[4796]: I1212 05:42:28.131010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerStarted","Data":"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f"} Dec 12 05:42:28 crc kubenswrapper[4796]: I1212 05:42:28.151823 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bpmjn" podStartSLOduration=3.725473139 podStartE2EDuration="6.15180717s" podCreationTimestamp="2025-12-12 05:42:22 +0000 UTC" firstStartedPulling="2025-12-12 05:42:25.098494319 +0000 UTC m=+4135.974511456" lastFinishedPulling="2025-12-12 05:42:27.52482834 +0000 UTC m=+4138.400845487" observedRunningTime="2025-12-12 05:42:28.151104828 +0000 UTC m=+4139.027121995" watchObservedRunningTime="2025-12-12 05:42:28.15180717 +0000 UTC m=+4139.027824317" Dec 12 05:42:33 crc kubenswrapper[4796]: I1212 05:42:33.324738 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:33 crc kubenswrapper[4796]: I1212 05:42:33.325202 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:33 crc kubenswrapper[4796]: I1212 05:42:33.375226 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:34 crc kubenswrapper[4796]: I1212 05:42:34.257969 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:34 crc kubenswrapper[4796]: I1212 05:42:34.394549 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.203919 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bpmjn" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="registry-server" containerID="cri-o://303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f" gracePeriod=2 Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.733167 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.837230 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content\") pod \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.837507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn2t\" (UniqueName: \"kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t\") pod \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.837546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities\") pod \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\" (UID: \"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932\") " Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.839209 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities" (OuterVolumeSpecName: "utilities") pod "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" (UID: "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.852003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t" (OuterVolumeSpecName: "kube-api-access-vmn2t") pod "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" (UID: "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932"). InnerVolumeSpecName "kube-api-access-vmn2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.902154 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" (UID: "d1c38d7e-783d-455f-9b5f-ae8f3e4b6932"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.940637 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn2t\" (UniqueName: \"kubernetes.io/projected/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-kube-api-access-vmn2t\") on node \"crc\" DevicePath \"\"" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.940894 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:42:36 crc kubenswrapper[4796]: I1212 05:42:36.940937 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.214243 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerID="303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f" exitCode=0 Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.214316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerDied","Data":"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f"} Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.214344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpmjn" event={"ID":"d1c38d7e-783d-455f-9b5f-ae8f3e4b6932","Type":"ContainerDied","Data":"de5643162ce4617b176a8e7e4a7f0083615940d146d1851ca57afcba2ca63f93"} Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.214366 4796 scope.go:117] "RemoveContainer" containerID="303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.216356 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bpmjn" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.241101 4796 scope.go:117] "RemoveContainer" containerID="b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.268478 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.284196 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bpmjn"] Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.289188 4796 scope.go:117] "RemoveContainer" containerID="96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.328536 4796 scope.go:117] "RemoveContainer" containerID="303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f" Dec 12 05:42:37 crc kubenswrapper[4796]: E1212 05:42:37.329022 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f\": container with ID starting with 303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f not found: ID does not exist" containerID="303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.329103 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f"} err="failed to get container status \"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f\": rpc error: code = NotFound desc = could not find container \"303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f\": container with ID starting with 303a4f6e5cd378d8dd970457f85685db3fe7bde40481f5945ee2c65f5562711f not found: ID does not exist" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.329139 4796 scope.go:117] "RemoveContainer" containerID="b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f" Dec 12 05:42:37 crc kubenswrapper[4796]: E1212 05:42:37.329570 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f\": container with ID starting with b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f not found: ID does not exist" containerID="b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.329607 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f"} err="failed to get container status \"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f\": rpc error: code = NotFound desc = could not find container \"b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f\": container with ID starting with b24e1521154c6401931cb4f9a81d1866709ea9ea7077f94e0be251fdf55c798f not found: ID does not exist" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.329631 4796 scope.go:117] "RemoveContainer" containerID="96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518" Dec 12 05:42:37 crc kubenswrapper[4796]: E1212 05:42:37.329916 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518\": container with ID starting with 96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518 not found: ID does not exist" containerID="96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.329968 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518"} err="failed to get container status \"96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518\": rpc error: code = NotFound desc = could not find container \"96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518\": container with ID starting with 96eb044d6a0a7ace146bb2933a1c51f98e9868cb55071fa965967182d28a0518 not found: ID does not exist" Dec 12 05:42:37 crc kubenswrapper[4796]: I1212 05:42:37.422517 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" path="/var/lib/kubelet/pods/d1c38d7e-783d-455f-9b5f-ae8f3e4b6932/volumes" Dec 12 05:42:40 crc kubenswrapper[4796]: I1212 05:42:40.412396 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:42:41 crc kubenswrapper[4796]: I1212 05:42:41.263778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef"} Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.186750 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws"] Dec 12 05:45:00 crc kubenswrapper[4796]: E1212 05:45:00.187938 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="registry-server" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.187959 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="registry-server" Dec 12 05:45:00 crc kubenswrapper[4796]: E1212 05:45:00.187975 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="extract-content" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.187983 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="extract-content" Dec 12 05:45:00 crc kubenswrapper[4796]: E1212 05:45:00.187999 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="extract-utilities" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.188007 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="extract-utilities" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.188271 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c38d7e-783d-455f-9b5f-ae8f3e4b6932" containerName="registry-server" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.189159 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.197136 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.197484 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.208423 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws"] Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.224531 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.225017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.225223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndgx\" (UniqueName: \"kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.327219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.327308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ndgx\" (UniqueName: \"kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.327366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.328528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume\") pod 
\"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.342426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.346717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ndgx\" (UniqueName: \"kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx\") pod \"collect-profiles-29425305-mmmws\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.511323 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:00 crc kubenswrapper[4796]: I1212 05:45:00.991392 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws"] Dec 12 05:45:01 crc kubenswrapper[4796]: W1212 05:45:01.387411 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb42eafd2_c57d_45db_86a0_d55373b81ff3.slice/crio-5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5 WatchSource:0}: Error finding container 5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5: Status 404 returned error can't find the container with id 5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5 Dec 12 05:45:01 crc kubenswrapper[4796]: I1212 05:45:01.497607 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" event={"ID":"b42eafd2-c57d-45db-86a0-d55373b81ff3","Type":"ContainerStarted","Data":"5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5"} Dec 12 05:45:02 crc kubenswrapper[4796]: I1212 05:45:02.506928 4796 generic.go:334] "Generic (PLEG): container finished" podID="b42eafd2-c57d-45db-86a0-d55373b81ff3" containerID="2917a2eebf72fd73efc9f2ae1db55aeb660c17d90e66d37e0c48e39ba577d792" exitCode=0 Dec 12 05:45:02 crc kubenswrapper[4796]: I1212 05:45:02.506991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" event={"ID":"b42eafd2-c57d-45db-86a0-d55373b81ff3","Type":"ContainerDied","Data":"2917a2eebf72fd73efc9f2ae1db55aeb660c17d90e66d37e0c48e39ba577d792"} Dec 12 05:45:02 crc kubenswrapper[4796]: I1212 05:45:02.969935 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:45:02 crc kubenswrapper[4796]: I1212 05:45:02.970234 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:45:03 crc kubenswrapper[4796]: I1212 05:45:03.982903 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.032036 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume\") pod \"b42eafd2-c57d-45db-86a0-d55373b81ff3\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.032148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume\") pod \"b42eafd2-c57d-45db-86a0-d55373b81ff3\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.032174 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ndgx\" (UniqueName: \"kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx\") pod \"b42eafd2-c57d-45db-86a0-d55373b81ff3\" (UID: \"b42eafd2-c57d-45db-86a0-d55373b81ff3\") " Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.032894 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b42eafd2-c57d-45db-86a0-d55373b81ff3" (UID: "b42eafd2-c57d-45db-86a0-d55373b81ff3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.049701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx" (OuterVolumeSpecName: "kube-api-access-4ndgx") pod "b42eafd2-c57d-45db-86a0-d55373b81ff3" (UID: "b42eafd2-c57d-45db-86a0-d55373b81ff3"). InnerVolumeSpecName "kube-api-access-4ndgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.051730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b42eafd2-c57d-45db-86a0-d55373b81ff3" (UID: "b42eafd2-c57d-45db-86a0-d55373b81ff3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.134160 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b42eafd2-c57d-45db-86a0-d55373b81ff3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.134190 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42eafd2-c57d-45db-86a0-d55373b81ff3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.134202 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ndgx\" (UniqueName: \"kubernetes.io/projected/b42eafd2-c57d-45db-86a0-d55373b81ff3-kube-api-access-4ndgx\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.524349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" event={"ID":"b42eafd2-c57d-45db-86a0-d55373b81ff3","Type":"ContainerDied","Data":"5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5"} Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.524388 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba76b70415289cfdb9c64e2c0070b41b4c408e26d14807fc59c3fc04917b0a5" Dec 12 05:45:04 crc kubenswrapper[4796]: I1212 05:45:04.524635 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425305-mmmws" Dec 12 05:45:05 crc kubenswrapper[4796]: I1212 05:45:05.066385 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs"] Dec 12 05:45:05 crc kubenswrapper[4796]: I1212 05:45:05.074268 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425260-tlrvs"] Dec 12 05:45:05 crc kubenswrapper[4796]: I1212 05:45:05.424615 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89e4806-1860-412b-a6e6-358cc04c1bce" path="/var/lib/kubelet/pods/a89e4806-1860-412b-a6e6-358cc04c1bce/volumes" Dec 12 05:45:22 crc kubenswrapper[4796]: I1212 05:45:22.876252 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:22 crc kubenswrapper[4796]: E1212 05:45:22.878331 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42eafd2-c57d-45db-86a0-d55373b81ff3" containerName="collect-profiles" Dec 12 05:45:22 crc kubenswrapper[4796]: I1212 05:45:22.878420 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42eafd2-c57d-45db-86a0-d55373b81ff3" containerName="collect-profiles" Dec 12 05:45:22 crc kubenswrapper[4796]: I1212 05:45:22.878788 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42eafd2-c57d-45db-86a0-d55373b81ff3" containerName="collect-profiles" Dec 12 05:45:22 crc kubenswrapper[4796]: I1212 05:45:22.880516 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:22 crc kubenswrapper[4796]: I1212 05:45:22.900408 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.000082 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.000138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.000452 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87st\" (UniqueName: \"kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.102680 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87st\" (UniqueName: \"kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.102931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.103018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.103435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.103495 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.133001 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c87st\" (UniqueName: \"kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st\") pod \"redhat-operators-lktdf\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.202954 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.715706 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:23 crc kubenswrapper[4796]: I1212 05:45:23.746147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerStarted","Data":"01d729e69f79231f8464fc2e8e3ba47ec2e0bf596c73b9c663d9bb7ebb18a9f1"} Dec 12 05:45:24 crc kubenswrapper[4796]: I1212 05:45:24.757520 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerID="782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671" exitCode=0 Dec 12 05:45:24 crc kubenswrapper[4796]: I1212 05:45:24.757602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerDied","Data":"782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671"} Dec 12 05:45:24 crc kubenswrapper[4796]: I1212 05:45:24.759496 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:45:26 crc kubenswrapper[4796]: I1212 05:45:26.778831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerStarted","Data":"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d"} Dec 12 05:45:29 crc kubenswrapper[4796]: I1212 05:45:29.803788 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerID="66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d" exitCode=0 Dec 12 05:45:29 crc kubenswrapper[4796]: I1212 05:45:29.803870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerDied","Data":"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d"} Dec 12 05:45:30 crc kubenswrapper[4796]: I1212 05:45:30.826330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerStarted","Data":"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5"} Dec 12 05:45:30 crc kubenswrapper[4796]: I1212 05:45:30.848416 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lktdf" podStartSLOduration=3.058727801 podStartE2EDuration="8.848395567s" podCreationTimestamp="2025-12-12 05:45:22 +0000 UTC" firstStartedPulling="2025-12-12 05:45:24.759195266 +0000 UTC m=+4315.635212413" lastFinishedPulling="2025-12-12 05:45:30.548863032 +0000 UTC m=+4321.424880179" observedRunningTime="2025-12-12 05:45:30.846761316 +0000 UTC m=+4321.722778493" watchObservedRunningTime="2025-12-12 05:45:30.848395567 +0000 UTC m=+4321.724412714" Dec 12 05:45:32 crc 
kubenswrapper[4796]: I1212 05:45:32.970120 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:45:32 crc kubenswrapper[4796]: I1212 05:45:32.970182 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:45:33 crc kubenswrapper[4796]: I1212 05:45:33.203147 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:33 crc kubenswrapper[4796]: I1212 05:45:33.203211 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:34 crc kubenswrapper[4796]: I1212 05:45:34.251506 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lktdf" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="registry-server" probeResult="failure" output=< Dec 12 05:45:34 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 05:45:34 crc kubenswrapper[4796]: > Dec 12 05:45:43 crc kubenswrapper[4796]: I1212 05:45:43.246395 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:43 crc kubenswrapper[4796]: I1212 05:45:43.302752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:43 crc kubenswrapper[4796]: I1212 05:45:43.482820 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:44 crc kubenswrapper[4796]: I1212 05:45:44.952679 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lktdf" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="registry-server" containerID="cri-o://050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5" gracePeriod=2 Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.454738 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.543568 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content\") pod \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.543729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c87st\" (UniqueName: \"kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st\") pod \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.543781 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities\") pod \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\" (UID: \"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66\") " Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.544966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities" (OuterVolumeSpecName: "utilities") pod "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" (UID: "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.550886 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st" (OuterVolumeSpecName: "kube-api-access-c87st") pod "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" (UID: "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66"). InnerVolumeSpecName "kube-api-access-c87st". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.645661 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c87st\" (UniqueName: \"kubernetes.io/projected/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-kube-api-access-c87st\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.645828 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.667413 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" (UID: "2f6046f1-e4d8-41e0-96fd-4b7d8f391f66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.748471 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.962757 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerID="050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5" exitCode=0 Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.963066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerDied","Data":"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5"} Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.963092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lktdf" event={"ID":"2f6046f1-e4d8-41e0-96fd-4b7d8f391f66","Type":"ContainerDied","Data":"01d729e69f79231f8464fc2e8e3ba47ec2e0bf596c73b9c663d9bb7ebb18a9f1"} Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.963110 4796 scope.go:117] "RemoveContainer" containerID="050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.963232 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lktdf" Dec 12 05:45:45 crc kubenswrapper[4796]: I1212 05:45:45.990735 4796 scope.go:117] "RemoveContainer" containerID="66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.030876 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.033040 4796 scope.go:117] "RemoveContainer" containerID="782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.039560 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lktdf"] Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.062242 4796 scope.go:117] "RemoveContainer" containerID="050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5" Dec 12 05:45:46 crc kubenswrapper[4796]: E1212 05:45:46.062859 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5\": container with ID starting with 050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5 not found: ID does not exist" containerID="050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.062950 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5"} err="failed to get container status \"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5\": rpc error: code = NotFound desc = could not find container \"050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5\": container with ID starting with 050ca3f228d1c0d32e6376dab9487dc86575526be17f88c1b951ba672f368ae5 not found: ID does not exist" Dec 12 05:45:46 crc 
kubenswrapper[4796]: I1212 05:45:46.063032 4796 scope.go:117] "RemoveContainer" containerID="66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d" Dec 12 05:45:46 crc kubenswrapper[4796]: E1212 05:45:46.063422 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d\": container with ID starting with 66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d not found: ID does not exist" containerID="66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.063503 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d"} err="failed to get container status \"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d\": rpc error: code = NotFound desc = could not find container \"66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d\": container with ID starting with 66b553e8101c4b12134f30c0956d3f18ef9df9df8572ee8ccad04c80cf8a225d not found: ID does not exist" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.063568 4796 scope.go:117] "RemoveContainer" containerID="782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671" Dec 12 05:45:46 crc kubenswrapper[4796]: E1212 05:45:46.063814 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671\": container with ID starting with 782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671 not found: ID does not exist" containerID="782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.063882 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671"} err="failed to get container status \"782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671\": rpc error: code = NotFound desc = could not find container \"782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671\": container with ID starting with 782fccaaccbd48a4deb9d0bee3001fa748d9d3231ce5d0118e71b39aa6809671 not found: ID does not exist" Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.981971 4796 generic.go:334] "Generic (PLEG): container finished" podID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" containerID="c31077ca704001a6615eaecea7dbdac674ec32e30484008d79c005e728d9eeae" exitCode=0 Dec 12 05:45:46 crc kubenswrapper[4796]: I1212 05:45:46.982078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd","Type":"ContainerDied","Data":"c31077ca704001a6615eaecea7dbdac674ec32e30484008d79c005e728d9eeae"} Dec 12 05:45:47 crc kubenswrapper[4796]: I1212 05:45:47.427748 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" path="/var/lib/kubelet/pods/2f6046f1-e4d8-41e0-96fd-4b7d8f391f66/volumes" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.424030 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.507589 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.507707 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.507782 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.507971 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.508029 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j44dj\" (UniqueName: \"kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.508053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.508081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.508108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.508139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\" (UID: \"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd\") " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.509797 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data" (OuterVolumeSpecName: "config-data") pod 
"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.511386 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.515912 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.613183 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.613220 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.613236 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.884060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.891455 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj" (OuterVolumeSpecName: "kube-api-access-j44dj") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "kube-api-access-j44dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.903366 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.908656 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.919298 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j44dj\" (UniqueName: \"kubernetes.io/projected/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-kube-api-access-j44dj\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.919337 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.919349 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.920240 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.920472 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.938593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" (UID: "5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 05:45:48 crc kubenswrapper[4796]: I1212 05:45:48.948930 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.009261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd","Type":"ContainerDied","Data":"64fc8959b3929f6b525711ba5d2aeec44b60aca24064a83e25538c5d02aaf520"} Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.009342 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fc8959b3929f6b525711ba5d2aeec44b60aca24064a83e25538c5d02aaf520" Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.009477 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.023858 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.023912 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:49 crc kubenswrapper[4796]: I1212 05:45:49.023922 4796 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.183448 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 05:45:58 crc kubenswrapper[4796]: E1212 05:45:58.184296 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="extract-content" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184319 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="extract-content" Dec 12 05:45:58 crc kubenswrapper[4796]: E1212 05:45:58.184333 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" containerName="tempest-tests-tempest-tests-runner" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184339 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" containerName="tempest-tests-tempest-tests-runner" Dec 12 05:45:58 crc kubenswrapper[4796]: E1212 05:45:58.184347 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="registry-server" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184356 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="registry-server" Dec 12 05:45:58 crc kubenswrapper[4796]: E1212 05:45:58.184399 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="extract-utilities" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184407 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="extract-utilities" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184590 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd" containerName="tempest-tests-tempest-tests-runner" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.184607 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6046f1-e4d8-41e0-96fd-4b7d8f391f66" containerName="registry-server" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.185244 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.188930 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g5wb6" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.193639 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.288331 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrxp\" (UniqueName: \"kubernetes.io/projected/e58842bb-2205-41c3-90d5-1d87ec35baf5-kube-api-access-5xrxp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.288436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.390565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.390942 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.391643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrxp\" (UniqueName: \"kubernetes.io/projected/e58842bb-2205-41c3-90d5-1d87ec35baf5-kube-api-access-5xrxp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.420119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrxp\" (UniqueName: \"kubernetes.io/projected/e58842bb-2205-41c3-90d5-1d87ec35baf5-kube-api-access-5xrxp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc kubenswrapper[4796]: I1212 05:45:58.480008 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e58842bb-2205-41c3-90d5-1d87ec35baf5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:58 crc 
kubenswrapper[4796]: I1212 05:45:58.513831 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 12 05:45:59 crc kubenswrapper[4796]: I1212 05:45:59.509457 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 12 05:46:00 crc kubenswrapper[4796]: I1212 05:46:00.103431 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e58842bb-2205-41c3-90d5-1d87ec35baf5","Type":"ContainerStarted","Data":"b7f083ddc048f928e978ecd1a4edb483492352a57261803e416a3d24ff2d90a7"} Dec 12 05:46:01 crc kubenswrapper[4796]: I1212 05:46:01.114920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e58842bb-2205-41c3-90d5-1d87ec35baf5","Type":"ContainerStarted","Data":"8e47c5726f8018a0fadffce72e0484e46c25cc678749c1db6e23aae2f13805a5"} Dec 12 05:46:01 crc kubenswrapper[4796]: I1212 05:46:01.130166 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.154341947 podStartE2EDuration="3.130145684s" podCreationTimestamp="2025-12-12 05:45:58 +0000 UTC" firstStartedPulling="2025-12-12 05:45:59.518117406 +0000 UTC m=+4350.394134553" lastFinishedPulling="2025-12-12 05:46:00.493921143 +0000 UTC m=+4351.369938290" observedRunningTime="2025-12-12 05:46:01.128078209 +0000 UTC m=+4352.004095366" watchObservedRunningTime="2025-12-12 05:46:01.130145684 +0000 UTC m=+4352.006162851" Dec 12 05:46:02 crc kubenswrapper[4796]: I1212 05:46:02.970222 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:46:02 crc kubenswrapper[4796]: I1212 05:46:02.970730 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:46:02 crc kubenswrapper[4796]: I1212 05:46:02.970814 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:46:02 crc kubenswrapper[4796]: I1212 05:46:02.971716 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:46:02 crc kubenswrapper[4796]: I1212 05:46:02.971813 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef" gracePeriod=600 Dec 12 05:46:03 crc kubenswrapper[4796]: I1212 05:46:03.137456 4796 
generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef" exitCode=0 Dec 12 05:46:03 crc kubenswrapper[4796]: I1212 05:46:03.137520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef"} Dec 12 05:46:03 crc kubenswrapper[4796]: I1212 05:46:03.137561 4796 scope.go:117] "RemoveContainer" containerID="3d4029d3ab552c262b73a8a6266465f091b9ff969b052079efa8da25b6a8f018" Dec 12 05:46:04 crc kubenswrapper[4796]: I1212 05:46:04.001876 4796 scope.go:117] "RemoveContainer" containerID="d5010b54026484c2fd99d98da272d69020363a12abe53537bff8a85f55283cc2" Dec 12 05:46:04 crc kubenswrapper[4796]: I1212 05:46:04.147295 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b"} Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.393653 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-687pd/must-gather-8jfrm"] Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.396632 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.402427 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-687pd"/"openshift-service-ca.crt" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.402956 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-687pd"/"kube-root-ca.crt" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.402967 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-687pd"/"default-dockercfg-nj26j" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.424011 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-687pd/must-gather-8jfrm"] Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.575900 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.575995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxxs\" (UniqueName: \"kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.677936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxxs\" (UniqueName: \"kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 
crc kubenswrapper[4796]: I1212 05:46:23.678144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.678677 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.699946 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxxs\" (UniqueName: \"kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs\") pod \"must-gather-8jfrm\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:23 crc kubenswrapper[4796]: I1212 05:46:23.717596 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:46:24 crc kubenswrapper[4796]: I1212 05:46:24.165043 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-687pd/must-gather-8jfrm"] Dec 12 05:46:24 crc kubenswrapper[4796]: I1212 05:46:24.316509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/must-gather-8jfrm" event={"ID":"9399ad7e-d3f0-41d6-bc32-bae910aab5ff","Type":"ContainerStarted","Data":"1a7fa0a82c96977d934062d5856f4e7cdf05c064306d6f778b260bda54ce110f"} Dec 12 05:46:32 crc kubenswrapper[4796]: I1212 05:46:32.399876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/must-gather-8jfrm" event={"ID":"9399ad7e-d3f0-41d6-bc32-bae910aab5ff","Type":"ContainerStarted","Data":"0340187e3cc100a7b49b63bfd4c53eef80164932adc4e72dc262d0bb6946a1b7"} Dec 12 05:46:32 crc kubenswrapper[4796]: I1212 05:46:32.400485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/must-gather-8jfrm" event={"ID":"9399ad7e-d3f0-41d6-bc32-bae910aab5ff","Type":"ContainerStarted","Data":"34d188acd38e6cda61163af65da56837ad63784a214f6164d6836d9d732ca759"} Dec 12 05:46:32 crc kubenswrapper[4796]: I1212 05:46:32.432344 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-687pd/must-gather-8jfrm" podStartSLOduration=1.9819203810000001 podStartE2EDuration="9.432330229s" podCreationTimestamp="2025-12-12 05:46:23 +0000 UTC" firstStartedPulling="2025-12-12 05:46:24.169417052 +0000 UTC m=+4375.045434199" lastFinishedPulling="2025-12-12 05:46:31.61982686 +0000 UTC m=+4382.495844047" observedRunningTime="2025-12-12 05:46:32.428392874 +0000 UTC m=+4383.304410021" watchObservedRunningTime="2025-12-12 05:46:32.432330229 +0000 UTC m=+4383.308347376" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.179597 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-687pd/crc-debug-v9nmn"] Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.181670 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.258369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6hs\" (UniqueName: \"kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.258429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.361170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6hs\" (UniqueName: \"kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.361492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.361622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.397254 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6hs\" (UniqueName: \"kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs\") pod \"crc-debug-v9nmn\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:37 crc kubenswrapper[4796]: I1212 05:46:37.511815 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:46:38 crc kubenswrapper[4796]: I1212 05:46:38.480418 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-v9nmn" event={"ID":"1dc92630-e38c-4b35-b96d-4162e1f0d37e","Type":"ContainerStarted","Data":"8bd55d37d9743e7536e0e6044a794c8910e3d1a83ab1274433577893c28d8c3e"} Dec 12 05:46:51 crc kubenswrapper[4796]: I1212 05:46:51.592688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-v9nmn" event={"ID":"1dc92630-e38c-4b35-b96d-4162e1f0d37e","Type":"ContainerStarted","Data":"b69730bbd7a68f30c727253bb0336d21fc4f5c06c4c696535f3e9de77d22ba11"} Dec 12 05:46:51 crc kubenswrapper[4796]: I1212 05:46:51.610802 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-687pd/crc-debug-v9nmn" podStartSLOduration=1.576640223 podStartE2EDuration="14.610782915s" podCreationTimestamp="2025-12-12 05:46:37 +0000 UTC" firstStartedPulling="2025-12-12 05:46:37.576519337 +0000 UTC m=+4388.452536484" lastFinishedPulling="2025-12-12 05:46:50.610662029 +0000 UTC m=+4401.486679176" observedRunningTime="2025-12-12 05:46:51.607370918 +0000 UTC m=+4402.483388065" watchObservedRunningTime="2025-12-12 05:46:51.610782915 +0000 UTC m=+4402.486800062" Dec 12 05:47:46 crc kubenswrapper[4796]: I1212 05:47:46.137971 4796 generic.go:334] "Generic (PLEG): container finished" podID="1dc92630-e38c-4b35-b96d-4162e1f0d37e" containerID="b69730bbd7a68f30c727253bb0336d21fc4f5c06c4c696535f3e9de77d22ba11" exitCode=0 Dec 12 05:47:46 crc kubenswrapper[4796]: I1212 05:47:46.138156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-v9nmn" event={"ID":"1dc92630-e38c-4b35-b96d-4162e1f0d37e","Type":"ContainerDied","Data":"b69730bbd7a68f30c727253bb0336d21fc4f5c06c4c696535f3e9de77d22ba11"} Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.245511 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.294404 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-687pd/crc-debug-v9nmn"] Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.303579 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-687pd/crc-debug-v9nmn"] Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.348923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf6hs\" (UniqueName: \"kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs\") pod \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.348988 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host\") pod \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\" (UID: \"1dc92630-e38c-4b35-b96d-4162e1f0d37e\") " Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.349337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host" (OuterVolumeSpecName: "host") pod "1dc92630-e38c-4b35-b96d-4162e1f0d37e" (UID: "1dc92630-e38c-4b35-b96d-4162e1f0d37e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.349590 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1dc92630-e38c-4b35-b96d-4162e1f0d37e-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.355527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs" (OuterVolumeSpecName: "kube-api-access-nf6hs") pod "1dc92630-e38c-4b35-b96d-4162e1f0d37e" (UID: "1dc92630-e38c-4b35-b96d-4162e1f0d37e"). InnerVolumeSpecName "kube-api-access-nf6hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.421999 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc92630-e38c-4b35-b96d-4162e1f0d37e" path="/var/lib/kubelet/pods/1dc92630-e38c-4b35-b96d-4162e1f0d37e/volumes" Dec 12 05:47:47 crc kubenswrapper[4796]: I1212 05:47:47.451073 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf6hs\" (UniqueName: \"kubernetes.io/projected/1dc92630-e38c-4b35-b96d-4162e1f0d37e-kube-api-access-nf6hs\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.183592 4796 scope.go:117] "RemoveContainer" containerID="b69730bbd7a68f30c727253bb0336d21fc4f5c06c4c696535f3e9de77d22ba11" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.183955 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-v9nmn" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.471062 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-687pd/crc-debug-zkkmk"] Dec 12 05:47:48 crc kubenswrapper[4796]: E1212 05:47:48.472895 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc92630-e38c-4b35-b96d-4162e1f0d37e" containerName="container-00" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.472914 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc92630-e38c-4b35-b96d-4162e1f0d37e" containerName="container-00" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.473255 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc92630-e38c-4b35-b96d-4162e1f0d37e" containerName="container-00" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.474102 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.480359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.480417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfbk\" (UniqueName: \"kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.582288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.582567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfbk\" (UniqueName: \"kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.582411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.609184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfbk\" (UniqueName: \"kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk\") pod \"crc-debug-zkkmk\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:48 crc kubenswrapper[4796]: I1212 05:47:48.789911 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:49 crc kubenswrapper[4796]: I1212 05:47:49.194135 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-zkkmk" event={"ID":"956ad2be-5457-4e08-8b96-76843fb7249c","Type":"ContainerStarted","Data":"dca5bc49dc613fdb4d5445965fdd6bda6fd623a3d791ee990b68764557cc07f3"} Dec 12 05:47:49 crc kubenswrapper[4796]: I1212 05:47:49.194479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-zkkmk" event={"ID":"956ad2be-5457-4e08-8b96-76843fb7249c","Type":"ContainerStarted","Data":"8e0bbe06e9b5832ff9655e81cb164105c3d54b83aa5d64a4e42813f95731efde"} Dec 12 05:47:49 crc kubenswrapper[4796]: I1212 05:47:49.215243 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-687pd/crc-debug-zkkmk" podStartSLOduration=1.215228306 podStartE2EDuration="1.215228306s" podCreationTimestamp="2025-12-12 05:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:47:49.205440227 +0000 UTC m=+4460.081457394" watchObservedRunningTime="2025-12-12 05:47:49.215228306 +0000 UTC m=+4460.091245453" Dec 12 05:47:50 crc kubenswrapper[4796]: I1212 05:47:50.203433 4796 generic.go:334] "Generic (PLEG): container finished" podID="956ad2be-5457-4e08-8b96-76843fb7249c" containerID="dca5bc49dc613fdb4d5445965fdd6bda6fd623a3d791ee990b68764557cc07f3" exitCode=0 Dec 12 05:47:50 crc kubenswrapper[4796]: I1212 05:47:50.203528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-zkkmk" event={"ID":"956ad2be-5457-4e08-8b96-76843fb7249c","Type":"ContainerDied","Data":"dca5bc49dc613fdb4d5445965fdd6bda6fd623a3d791ee990b68764557cc07f3"} Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.308689 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.432895 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfbk\" (UniqueName: \"kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk\") pod \"956ad2be-5457-4e08-8b96-76843fb7249c\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.432949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host\") pod \"956ad2be-5457-4e08-8b96-76843fb7249c\" (UID: \"956ad2be-5457-4e08-8b96-76843fb7249c\") " Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.434381 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host" (OuterVolumeSpecName: "host") pod "956ad2be-5457-4e08-8b96-76843fb7249c" (UID: "956ad2be-5457-4e08-8b96-76843fb7249c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.449993 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk" (OuterVolumeSpecName: "kube-api-access-8hfbk") pod "956ad2be-5457-4e08-8b96-76843fb7249c" (UID: "956ad2be-5457-4e08-8b96-76843fb7249c"). 
InnerVolumeSpecName "kube-api-access-8hfbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.535676 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hfbk\" (UniqueName: \"kubernetes.io/projected/956ad2be-5457-4e08-8b96-76843fb7249c-kube-api-access-8hfbk\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.535708 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956ad2be-5457-4e08-8b96-76843fb7249c-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.629841 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-687pd/crc-debug-zkkmk"] Dec 12 05:47:51 crc kubenswrapper[4796]: I1212 05:47:51.639344 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-687pd/crc-debug-zkkmk"] Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.226817 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e0bbe06e9b5832ff9655e81cb164105c3d54b83aa5d64a4e42813f95731efde" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.226902 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-zkkmk" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.836392 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-687pd/crc-debug-qgp69"] Dec 12 05:47:52 crc kubenswrapper[4796]: E1212 05:47:52.836747 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956ad2be-5457-4e08-8b96-76843fb7249c" containerName="container-00" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.836759 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="956ad2be-5457-4e08-8b96-76843fb7249c" containerName="container-00" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.836963 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="956ad2be-5457-4e08-8b96-76843fb7249c" containerName="container-00" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.837584 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.962841 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvgr\" (UniqueName: \"kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:52 crc kubenswrapper[4796]: I1212 05:47:52.963446 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.064952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlvgr\" (UniqueName: \"kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.065068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.065217 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.093404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlvgr\" (UniqueName: \"kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr\") pod \"crc-debug-qgp69\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.154100 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.236799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-qgp69" event={"ID":"8000f98e-01a1-47ff-8d54-c9cd1bb27165","Type":"ContainerStarted","Data":"5f26861febbf51ac41464ce63457d4878c69e1989ad1f792c917d0924b7dafda"} Dec 12 05:47:53 crc kubenswrapper[4796]: I1212 05:47:53.427530 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956ad2be-5457-4e08-8b96-76843fb7249c" path="/var/lib/kubelet/pods/956ad2be-5457-4e08-8b96-76843fb7249c/volumes" Dec 12 05:47:54 crc kubenswrapper[4796]: I1212 05:47:54.247687 4796 generic.go:334] "Generic (PLEG): container finished" podID="8000f98e-01a1-47ff-8d54-c9cd1bb27165" containerID="fe1b5c8210f29b9c5f5b8773aadeac765182014e676e725314c9a54aecf789c6" exitCode=0 Dec 12 05:47:54 crc kubenswrapper[4796]: I1212 05:47:54.247738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/crc-debug-qgp69" event={"ID":"8000f98e-01a1-47ff-8d54-c9cd1bb27165","Type":"ContainerDied","Data":"fe1b5c8210f29b9c5f5b8773aadeac765182014e676e725314c9a54aecf789c6"} Dec 12 05:47:54 crc kubenswrapper[4796]: I1212 05:47:54.293910 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-687pd/crc-debug-qgp69"] Dec 12 05:47:54 crc kubenswrapper[4796]: I1212 05:47:54.304186 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-687pd/crc-debug-qgp69"] Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.375514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.512315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlvgr\" (UniqueName: \"kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr\") pod \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.512380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host\") pod \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\" (UID: \"8000f98e-01a1-47ff-8d54-c9cd1bb27165\") " Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.512563 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host" (OuterVolumeSpecName: "host") pod "8000f98e-01a1-47ff-8d54-c9cd1bb27165" (UID: "8000f98e-01a1-47ff-8d54-c9cd1bb27165"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.514038 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000f98e-01a1-47ff-8d54-c9cd1bb27165-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.528479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr" (OuterVolumeSpecName: "kube-api-access-wlvgr") pod "8000f98e-01a1-47ff-8d54-c9cd1bb27165" (UID: "8000f98e-01a1-47ff-8d54-c9cd1bb27165"). InnerVolumeSpecName "kube-api-access-wlvgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:47:55 crc kubenswrapper[4796]: I1212 05:47:55.615695 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlvgr\" (UniqueName: \"kubernetes.io/projected/8000f98e-01a1-47ff-8d54-c9cd1bb27165-kube-api-access-wlvgr\") on node \"crc\" DevicePath \"\"" Dec 12 05:47:56 crc kubenswrapper[4796]: I1212 05:47:56.264147 4796 scope.go:117] "RemoveContainer" containerID="fe1b5c8210f29b9c5f5b8773aadeac765182014e676e725314c9a54aecf789c6" Dec 12 05:47:56 crc kubenswrapper[4796]: I1212 05:47:56.264183 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-687pd/crc-debug-qgp69" Dec 12 05:47:57 crc kubenswrapper[4796]: I1212 05:47:57.421539 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8000f98e-01a1-47ff-8d54-c9cd1bb27165" path="/var/lib/kubelet/pods/8000f98e-01a1-47ff-8d54-c9cd1bb27165/volumes" Dec 12 05:48:12 crc kubenswrapper[4796]: I1212 05:48:12.673492 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c7c5bcf6-phtcl_1c5fd6e6-e8f6-46da-81fa-5ae035fbc255/barbican-api/0.log" Dec 12 05:48:12 crc kubenswrapper[4796]: I1212 05:48:12.739991 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c7c5bcf6-phtcl_1c5fd6e6-e8f6-46da-81fa-5ae035fbc255/barbican-api-log/0.log" Dec 12 05:48:12 crc kubenswrapper[4796]: I1212 05:48:12.874352 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8586b565b6-wdsdw_1d1a0aba-21c7-4f4f-95f8-41802b2d23c3/barbican-keystone-listener/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.111643 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8586b565b6-wdsdw_1d1a0aba-21c7-4f4f-95f8-41802b2d23c3/barbican-keystone-listener-log/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.126600 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56fbf7b8ff-h4cs5_92265cff-6059-4736-a5cb-8935972c0bb8/barbican-worker/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.197088 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56fbf7b8ff-h4cs5_92265cff-6059-4736-a5cb-8935972c0bb8/barbican-worker-log/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.374506 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6_86779d4a-5602-4b32-8e50-cd72fac17e8a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.463072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/ceilometer-central-agent/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.570620 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/ceilometer-notification-agent/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.575149 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/proxy-httpd/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.641815 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/sg-core/0.log" Dec 12 05:48:13 crc 
kubenswrapper[4796]: I1212 05:48:13.762230 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2d0ca4f-8c51-492b-ae06-3d09ecdc4934/cinder-api-log/0.log" Dec 12 05:48:13 crc kubenswrapper[4796]: I1212 05:48:13.872877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2d0ca4f-8c51-492b-ae06-3d09ecdc4934/cinder-api/0.log" Dec 12 05:48:14 crc kubenswrapper[4796]: I1212 05:48:14.053309 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ec3a4988-59e7-443a-bbf1-31cd16abdcd6/cinder-scheduler/0.log" Dec 12 05:48:14 crc kubenswrapper[4796]: I1212 05:48:14.116418 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ec3a4988-59e7-443a-bbf1-31cd16abdcd6/probe/0.log" Dec 12 05:48:14 crc kubenswrapper[4796]: I1212 05:48:14.308995 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-22ksj_906f3822-cad4-497a-a87e-d50a257f3b15/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:14 crc kubenswrapper[4796]: I1212 05:48:14.352988 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6qncr_33889558-2c62-4dcd-ba10-c98855839d1e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:15 crc kubenswrapper[4796]: I1212 05:48:15.716985 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/init/0.log" Dec 12 05:48:16 crc kubenswrapper[4796]: I1212 05:48:16.295996 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/init/0.log" Dec 12 05:48:16 crc kubenswrapper[4796]: I1212 05:48:16.499356 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/dnsmasq-dns/0.log" Dec 12 05:48:16 crc kubenswrapper[4796]: I1212 05:48:16.533398 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ckkts_45182716-6fae-4d42-81e2-ccdea8bf145b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:16 crc kubenswrapper[4796]: I1212 05:48:16.842224 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cd46cb5-ba6c-480f-a039-95a66caa648a/glance-httpd/0.log" Dec 12 05:48:16 crc kubenswrapper[4796]: I1212 05:48:16.847795 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cd46cb5-ba6c-480f-a039-95a66caa648a/glance-log/0.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.001788 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b88340e6-0adf-40e5-9e91-610c949cd71b/glance-httpd/0.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.183782 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b88340e6-0adf-40e5-9e91-610c949cd71b/glance-log/0.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.246114 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon/3.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.345464 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon/2.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.657511 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6_f181d2cb-61a4-4328-88b8-18dd8cd24228/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.683810 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4tkgt_b92a97d2-b9e1-4717-a79c-a085aaaed3b6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:17 crc kubenswrapper[4796]: I1212 05:48:17.752555 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon-log/0.log" Dec 12 05:48:18 crc kubenswrapper[4796]: I1212 05:48:18.006395 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29425261-srdtz_7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9/keystone-cron/0.log" Dec 12 05:48:18 crc kubenswrapper[4796]: I1212 05:48:18.090543 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_416b9e99-eb64-4c24-9c32-0fb5bc210a2a/kube-state-metrics/0.log" Dec 12 05:48:18 crc kubenswrapper[4796]: I1212 05:48:18.275841 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-86bc7ff485-lzxvk_b621dfe8-e202-40a6-8544-9195e0d7dc80/keystone-api/0.log" Dec 12 05:48:18 crc kubenswrapper[4796]: I1212 05:48:18.338106 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4_61d49bcc-8f04-4fc8-8f61-70e5cc450c5a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:18 crc kubenswrapper[4796]: I1212 05:48:18.991088 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s_177571dd-6d0b-463d-8831-2983eb8a331d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:19 crc kubenswrapper[4796]: I1212 05:48:19.087952 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c45989d6c-2r8mn_c2090789-6394-4377-8d8c-4c37cd7bd857/neutron-httpd/0.log" Dec 12 05:48:19 crc kubenswrapper[4796]: I1212 05:48:19.176776 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c45989d6c-2r8mn_c2090789-6394-4377-8d8c-4c37cd7bd857/neutron-api/0.log" Dec 12 05:48:20 crc kubenswrapper[4796]: I1212 05:48:20.196641 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cd3c1c91-f0c1-4dd0-b23e-227f1353858a/nova-cell0-conductor-conductor/0.log" Dec 12 05:48:20 crc kubenswrapper[4796]: I1212 05:48:20.398982 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e91e5c81-6ced-4f8f-b7ba-c40f35e989ca/nova-cell1-conductor-conductor/0.log" Dec 12 05:48:20 crc kubenswrapper[4796]: I1212 05:48:20.737961 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f70e0dab-4b7c-4e6f-b28e-76e72492ca1d/nova-api-log/0.log" Dec 12 05:48:21 crc kubenswrapper[4796]: I1212 05:48:21.192873 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cb994df9-2eed-4089-9770-ccb138bf3c80/nova-cell1-novncproxy-novncproxy/0.log" Dec 12 05:48:21 crc kubenswrapper[4796]: I1212 05:48:21.208557 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f70e0dab-4b7c-4e6f-b28e-76e72492ca1d/nova-api-api/0.log" Dec 12 05:48:21 crc kubenswrapper[4796]: I1212 05:48:21.290221 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rl279_a4dc653f-0e4f-4c95-a71a-c96d4419f484/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:21 crc kubenswrapper[4796]: I1212 05:48:21.695614 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_556757b9-1c0e-4bc0-8a0f-81a77ab8705b/nova-metadata-log/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.021738 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/mysql-bootstrap/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.260414 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/galera/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.275688 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_df191340-1fad-4c88-b12c-a4af0fc96924/nova-scheduler-scheduler/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.312919 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/mysql-bootstrap/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.531974 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/mysql-bootstrap/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.735120 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/mysql-bootstrap/0.log" Dec 12 05:48:22 crc kubenswrapper[4796]: I1212 05:48:22.757418 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/galera/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.000784 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9826fd92-e55e-487f-ac6a-73a3e7f4d88a/openstackclient/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.219245 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g7lfn_2dc1f12e-5104-4f56-ae2a-da52e2f60434/openstack-network-exporter/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.377556 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ml9sj_0751eb6e-3452-4b8d-abfa-d37121e1a03e/ovn-controller/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.389795 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_556757b9-1c0e-4bc0-8a0f-81a77ab8705b/nova-metadata-metadata/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.634747 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server-init/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.853671 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.898077 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server-init/0.log" Dec 12 05:48:23 crc kubenswrapper[4796]: I1212 05:48:23.904672 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovs-vswitchd/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.181394 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tkc6x_60d6d74d-f5f7-43c4-8462-f073926de480/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.221886 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3ad884c-e210-4b14-b98b-19d888c3886d/openstack-network-exporter/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.277192 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3ad884c-e210-4b14-b98b-19d888c3886d/ovn-northd/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.458030 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7de6f8df-6271-4b09-94c5-642c37337fcf/openstack-network-exporter/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.486124 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7de6f8df-6271-4b09-94c5-642c37337fcf/ovsdbserver-nb/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.862177 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5a25b030-6ebc-4ac2-8114-f24663c7a815/ovsdbserver-sb/0.log" Dec 12 05:48:24 crc kubenswrapper[4796]: I1212 05:48:24.911660 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5a25b030-6ebc-4ac2-8114-f24663c7a815/openstack-network-exporter/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.070405 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8ffdd64b-7gmkf_724e3890-930d-4492-8599-460add96a852/placement-api/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.192699 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/setup-container/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.372373 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8ffdd64b-7gmkf_724e3890-930d-4492-8599-460add96a852/placement-log/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.484041 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/setup-container/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.555919 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/rabbitmq/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.660301 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/setup-container/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.873226 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/setup-container/0.log" Dec 12 05:48:25 crc kubenswrapper[4796]: I1212 05:48:25.942160 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/rabbitmq/0.log" Dec 12 05:48:26 crc kubenswrapper[4796]: I1212 05:48:26.022844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l_e3c26ddb-8907-4b44-bc42-86138dc25d8b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:26 crc kubenswrapper[4796]: I1212 05:48:26.285408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hhdhc_47c5ed15-7a61-4101-b8f4-470f53ef2a10/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:26 crc kubenswrapper[4796]: I1212 05:48:26.459676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7_ebb00117-c00a-49db-aeea-bcff226d7283/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:26 crc kubenswrapper[4796]: I1212 05:48:26.669743 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4m89q_925383e5-f552-447e-a749-b0337865ce48/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.025983 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f95w9_d1178cb0-94fb-46a2-84b8-a67ed7e55856/ssh-known-hosts-edpm-deployment/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.243181 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58987c9f79-c2xlb_80ea0a4a-0715-4d5b-be0c-e11f00e6d743/proxy-httpd/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.287888 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x6qjf_db25cf2d-5a36-4289-b5d2-3a156acaee44/swift-ring-rebalance/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.289210 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58987c9f79-c2xlb_80ea0a4a-0715-4d5b-be0c-e11f00e6d743/proxy-server/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.583030 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-reaper/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.671600 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-auditor/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.684557 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-replicator/0.log" Dec 12 05:48:27 crc kubenswrapper[4796]: I1212 05:48:27.769012 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-server/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.273593 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-auditor/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.311812 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-server/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.401851 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-updater/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.431414 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-replicator/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.565569 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-auditor/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.651138 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-expirer/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.674355 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-replicator/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.741215 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-server/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.910831 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/rsync/0.log" Dec 12 05:48:28 crc kubenswrapper[4796]: I1212 05:48:28.959697 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-updater/0.log" Dec 12 05:48:29 crc kubenswrapper[4796]: I1212 05:48:29.065076 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/swift-recon-cron/0.log" Dec 12 05:48:29 crc kubenswrapper[4796]: I1212 05:48:29.260823 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-phst6_e3e68ee3-5e0b-4748-a03f-9c4d226b690c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:29 crc kubenswrapper[4796]: I1212 05:48:29.336842 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd/tempest-tests-tempest-tests-runner/0.log" Dec 12 05:48:29 crc kubenswrapper[4796]: I1212 05:48:29.599685 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e58842bb-2205-41c3-90d5-1d87ec35baf5/test-operator-logs-container/0.log" Dec 12 05:48:29 crc kubenswrapper[4796]: I1212 05:48:29.691999 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg_165cb754-40d9-41ec-abd3-1f5fbaeeb13c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:48:32 crc kubenswrapper[4796]: I1212 05:48:32.980147 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:48:32 crc kubenswrapper[4796]: I1212 05:48:32.980600 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:48:42 crc kubenswrapper[4796]: I1212 05:48:42.476893 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_497a4966-f578-46a2-a33c-c3288f96f7f1/memcached/0.log" Dec 12 05:49:02 crc kubenswrapper[4796]: I1212 05:49:02.970041 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:49:02 crc kubenswrapper[4796]: I1212 05:49:02.970621 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.230126 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.382676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.392733 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.402828 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.714033 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/extract/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.736195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.761583 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:49:04 crc kubenswrapper[4796]: I1212 05:49:04.944647 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-f9h97_f4b37e55-be7c-467b-9739-e82c28f1916e/kube-rbac-proxy/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.091056 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-f9h97_f4b37e55-be7c-467b-9739-e82c28f1916e/manager/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.096267 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-27f9h_19b30665-06c6-48e5-8ec7-3eeaf3d3e72e/kube-rbac-proxy/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.255567 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-27f9h_19b30665-06c6-48e5-8ec7-3eeaf3d3e72e/manager/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.423429 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-qzrqh_565c4c89-1b44-462b-8307-15d3d0a6cf1f/kube-rbac-proxy/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.460865 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-qzrqh_565c4c89-1b44-462b-8307-15d3d0a6cf1f/manager/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.652509 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-mdmcv_22df48e7-88f5-43df-bdce-9116599bea1b/kube-rbac-proxy/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.764552 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-mdmcv_22df48e7-88f5-43df-bdce-9116599bea1b/manager/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.960477 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jqd2f_3092bc98-4221-47ff-bae0-06efcfa85522/kube-rbac-proxy/0.log" Dec 12 05:49:05 crc kubenswrapper[4796]: I1212 05:49:05.997245 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jqd2f_3092bc98-4221-47ff-bae0-06efcfa85522/manager/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.080518 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-5v7h7_035421c3-b1dd-48de-a195-04bfef7c5a0e/kube-rbac-proxy/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.239022 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-5v7h7_035421c3-b1dd-48de-a195-04bfef7c5a0e/manager/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.297211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lzzhj_301fd006-5a61-46bd-b19f-bbd1ba8f7baf/kube-rbac-proxy/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.626208 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lzzhj_301fd006-5a61-46bd-b19f-bbd1ba8f7baf/manager/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.664690 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-w5fjz_c14d829a-f63e-404c-b117-65c0e15280e8/kube-rbac-proxy/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.694065 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-w5fjz_c14d829a-f63e-404c-b117-65c0e15280e8/manager/0.log" Dec 12 05:49:06 crc kubenswrapper[4796]: I1212 05:49:06.852081 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-6dsjv_b47de1f3-3223-47bb-a707-72ee23490049/kube-rbac-proxy/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.064303 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-6dsjv_b47de1f3-3223-47bb-a707-72ee23490049/manager/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.160518 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-47x2m_9fb465c9-338c-4755-ba24-b7985e57fa06/kube-rbac-proxy/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.315169 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-47x2m_9fb465c9-338c-4755-ba24-b7985e57fa06/manager/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.389540 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-ws54v_43ac4ab3-1f18-4b18-8a83-1561837988eb/kube-rbac-proxy/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.502432 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-ws54v_43ac4ab3-1f18-4b18-8a83-1561837988eb/manager/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.592720 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-2dggq_8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73/kube-rbac-proxy/0.log" Dec 12 05:49:07 crc kubenswrapper[4796]: I1212 05:49:07.714537 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-2dggq_8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73/manager/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.070685 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-n6m6j_a0340c55-5a39-4841-a602-694ef484e3ec/kube-rbac-proxy/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.198201 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-n6m6j_a0340c55-5a39-4841-a602-694ef484e3ec/manager/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.228447 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-kgq7g_6a645239-185e-4bfb-b8a8-9c442ae1c379/kube-rbac-proxy/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.354618 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-kgq7g_6a645239-185e-4bfb-b8a8-9c442ae1c379/manager/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.483015 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8bscf_bb156fa4-57d0-457f-be10-e9c013f37a84/manager/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: I1212 05:49:08.520145 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8bscf_bb156fa4-57d0-457f-be10-e9c013f37a84/kube-rbac-proxy/0.log" Dec 12 05:49:08 crc kubenswrapper[4796]: 
I1212 05:49:08.934994 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8jkpm_e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31/registry-server/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.163178 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d67f9f647-pf79m_2f68b517-0566-4a78-92bd-215a5b6e304b/operator/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.258356 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wdgv6_c3102315-cf09-47e4-b1b2-4721b38ac5b8/kube-rbac-proxy/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.358006 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wdgv6_c3102315-cf09-47e4-b1b2-4721b38ac5b8/manager/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.527734 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8k4ws_7f217e33-5880-42b4-931f-8a4633195ffc/manager/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.533087 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8k4ws_7f217e33-5880-42b4-931f-8a4633195ffc/kube-rbac-proxy/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.791074 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-rpkcq_252d73ba-87e9-492d-a9c4-2f8e4e8d66fa/kube-rbac-proxy/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.812114 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7775c45dbc-9fh7g_f2d005ee-450b-4029-bb3c-a5b389edc347/manager/0.log" Dec 12 05:49:09 crc kubenswrapper[4796]: I1212 05:49:09.829980 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jz9xq_0d5457f7-3a7d-4a0e-a733-33c78860c9b5/operator/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.431936 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-rpkcq_252d73ba-87e9-492d-a9c4-2f8e4e8d66fa/manager/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.524251 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gvgzg_38f86aeb-2024-40b1-8c60-25c2c78ef7ac/kube-rbac-proxy/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.553619 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gvgzg_38f86aeb-2024-40b1-8c60-25c2c78ef7ac/manager/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.665976 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-jh5td_bbeb9b29-5dc1-4cdf-94de-397cdb4a32de/kube-rbac-proxy/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.761003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-jh5td_bbeb9b29-5dc1-4cdf-94de-397cdb4a32de/manager/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 
05:49:10.803906 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-mt59j_cd307815-1f04-446d-a89b-60fa6574f0db/kube-rbac-proxy/0.log" Dec 12 05:49:10 crc kubenswrapper[4796]: I1212 05:49:10.883050 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-mt59j_cd307815-1f04-446d-a89b-60fa6574f0db/manager/0.log" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.688926 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:24 crc kubenswrapper[4796]: E1212 05:49:24.689979 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8000f98e-01a1-47ff-8d54-c9cd1bb27165" containerName="container-00" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.689996 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8000f98e-01a1-47ff-8d54-c9cd1bb27165" containerName="container-00" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.690306 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8000f98e-01a1-47ff-8d54-c9cd1bb27165" containerName="container-00" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.694714 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.703041 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.827596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnrk\" (UniqueName: \"kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.827685 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.827824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.929881 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnrk\" (UniqueName: \"kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.930269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities\") pod \"community-operators-rncm5\" (UID: 
\"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.930655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.932160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.932609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:24 crc kubenswrapper[4796]: I1212 05:49:24.949961 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnrk\" (UniqueName: \"kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk\") pod \"community-operators-rncm5\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:25 crc kubenswrapper[4796]: I1212 05:49:25.021184 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:25 crc kubenswrapper[4796]: I1212 05:49:25.878329 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:26 crc kubenswrapper[4796]: I1212 05:49:26.114716 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerID="8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815" exitCode=0 Dec 12 05:49:26 crc kubenswrapper[4796]: I1212 05:49:26.114790 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerDied","Data":"8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815"} Dec 12 05:49:26 crc kubenswrapper[4796]: I1212 05:49:26.115369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerStarted","Data":"ebed9730732e9f00c0c7458d7833f58d26cbac06eae60a98c722be996263d5d8"} Dec 12 05:49:28 crc kubenswrapper[4796]: I1212 05:49:28.178322 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerStarted","Data":"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177"} Dec 12 05:49:29 crc kubenswrapper[4796]: I1212 05:49:29.187943 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerID="a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177" exitCode=0 Dec 12 05:49:29 crc kubenswrapper[4796]: 
I1212 05:49:29.187981 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerDied","Data":"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177"} Dec 12 05:49:30 crc kubenswrapper[4796]: I1212 05:49:30.199832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerStarted","Data":"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15"} Dec 12 05:49:30 crc kubenswrapper[4796]: I1212 05:49:30.227332 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rncm5" podStartSLOduration=2.58325373 podStartE2EDuration="6.227310335s" podCreationTimestamp="2025-12-12 05:49:24 +0000 UTC" firstStartedPulling="2025-12-12 05:49:26.117453394 +0000 UTC m=+4556.993470551" lastFinishedPulling="2025-12-12 05:49:29.761510009 +0000 UTC m=+4560.637527156" observedRunningTime="2025-12-12 05:49:30.222833184 +0000 UTC m=+4561.098850331" watchObservedRunningTime="2025-12-12 05:49:30.227310335 +0000 UTC m=+4561.103327492" Dec 12 05:49:32 crc kubenswrapper[4796]: I1212 05:49:32.970158 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:49:32 crc kubenswrapper[4796]: I1212 05:49:32.970768 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:49:32 crc kubenswrapper[4796]: I1212 05:49:32.970967 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:49:32 crc kubenswrapper[4796]: I1212 05:49:32.972022 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:49:32 crc kubenswrapper[4796]: I1212 05:49:32.972121 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" gracePeriod=600 Dec 12 05:49:33 crc kubenswrapper[4796]: E1212 05:49:33.097819 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:49:33 crc kubenswrapper[4796]: 
I1212 05:49:33.225001 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" exitCode=0 Dec 12 05:49:33 crc kubenswrapper[4796]: I1212 05:49:33.225055 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b"} Dec 12 05:49:33 crc kubenswrapper[4796]: I1212 05:49:33.225101 4796 scope.go:117] "RemoveContainer" containerID="6cfd5f0abab7e1b9b74d6d43adf1c520251ed6b7b5b267ad9722c2e646315aef" Dec 12 05:49:33 crc kubenswrapper[4796]: I1212 05:49:33.225817 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:49:33 crc kubenswrapper[4796]: E1212 05:49:33.226079 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:49:35 crc kubenswrapper[4796]: I1212 05:49:35.021605 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:35 crc kubenswrapper[4796]: I1212 05:49:35.022268 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:35 crc kubenswrapper[4796]: I1212 05:49:35.221263 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:35 crc kubenswrapper[4796]: I1212 05:49:35.292104 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:35 crc kubenswrapper[4796]: I1212 05:49:35.462658 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.216237 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n86m7_89c07828-a1a4-4261-b744-fec105f01000/control-plane-machine-set-operator/0.log" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.257624 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rncm5" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="registry-server" containerID="cri-o://351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15" gracePeriod=2 Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.265147 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45hnd_8e05fbfb-ba4c-465c-94a2-49f666f39c02/kube-rbac-proxy/0.log" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.515564 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45hnd_8e05fbfb-ba4c-465c-94a2-49f666f39c02/machine-api-operator/0.log" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.764777 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.784532 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhnrk\" (UniqueName: \"kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk\") pod \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.784723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content\") pod \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.784886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities\") pod \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\" (UID: \"6e7a3274-ccf5-475e-a325-bd32b3640a1b\") " Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.786027 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities" (OuterVolumeSpecName: "utilities") pod "6e7a3274-ccf5-475e-a325-bd32b3640a1b" (UID: "6e7a3274-ccf5-475e-a325-bd32b3640a1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.808420 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk" (OuterVolumeSpecName: "kube-api-access-bhnrk") pod "6e7a3274-ccf5-475e-a325-bd32b3640a1b" (UID: "6e7a3274-ccf5-475e-a325-bd32b3640a1b"). InnerVolumeSpecName "kube-api-access-bhnrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.887509 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.887539 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhnrk\" (UniqueName: \"kubernetes.io/projected/6e7a3274-ccf5-475e-a325-bd32b3640a1b-kube-api-access-bhnrk\") on node \"crc\" DevicePath \"\"" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.891869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7a3274-ccf5-475e-a325-bd32b3640a1b" (UID: "6e7a3274-ccf5-475e-a325-bd32b3640a1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:49:37 crc kubenswrapper[4796]: I1212 05:49:37.989556 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7a3274-ccf5-475e-a325-bd32b3640a1b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.267326 4796 generic.go:334] "Generic (PLEG): container finished" podID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerID="351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15" exitCode=0 Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.267364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerDied","Data":"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15"} Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.267391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rncm5" event={"ID":"6e7a3274-ccf5-475e-a325-bd32b3640a1b","Type":"ContainerDied","Data":"ebed9730732e9f00c0c7458d7833f58d26cbac06eae60a98c722be996263d5d8"} Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.267388 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rncm5" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.267468 4796 scope.go:117] "RemoveContainer" containerID="351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.297243 4796 scope.go:117] "RemoveContainer" containerID="a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.301399 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.307955 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rncm5"] Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.318094 4796 scope.go:117] "RemoveContainer" containerID="8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.373114 4796 scope.go:117] "RemoveContainer" containerID="351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15" Dec 12 05:49:38 crc kubenswrapper[4796]: E1212 05:49:38.374531 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15\": container with ID starting with 351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15 not found: ID does not exist" containerID="351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.374576 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15"} err="failed to get container status \"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15\": rpc error: code = NotFound desc = could not find container \"351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15\": container with ID starting with 351e97bda263a535baa7d845c956f7e6b6e598745730554bd1dade0ea502fd15 not found: ID does not exist" Dec 12 
05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.374603 4796 scope.go:117] "RemoveContainer" containerID="a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177" Dec 12 05:49:38 crc kubenswrapper[4796]: E1212 05:49:38.374910 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177\": container with ID starting with a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177 not found: ID does not exist" containerID="a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.374946 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177"} err="failed to get container status \"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177\": rpc error: code = NotFound desc = could not find container \"a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177\": container with ID starting with a93bbe99d20b243f5988b88d3400119c14a122be5e9a8fb56b73c5310034e177 not found: ID does not exist" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.374969 4796 scope.go:117] "RemoveContainer" containerID="8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815" Dec 12 05:49:38 crc kubenswrapper[4796]: E1212 05:49:38.375217 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815\": container with ID starting with 8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815 not found: ID does not exist" containerID="8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815" Dec 12 05:49:38 crc kubenswrapper[4796]: I1212 05:49:38.375236 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815"} err="failed to get container status \"8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815\": rpc error: code = NotFound desc = could not find container \"8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815\": container with ID starting with 8bbb8dbedf0e0c62bcfadc82a9c4428613237ed09bbc805ac42a571400fca815 not found: ID does not exist" Dec 12 05:49:39 crc kubenswrapper[4796]: I1212 05:49:39.434344 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" path="/var/lib/kubelet/pods/6e7a3274-ccf5-475e-a325-bd32b3640a1b/volumes" Dec 12 05:49:44 crc kubenswrapper[4796]: I1212 05:49:44.411674 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:49:44 crc kubenswrapper[4796]: E1212 05:49:44.412423 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:49:52 crc kubenswrapper[4796]: I1212 05:49:52.608359 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ww44f_8860987b-111c-4cd3-b138-4cce9dce0ad8/cert-manager-controller/0.log" Dec 12 05:49:52 crc kubenswrapper[4796]: I1212 05:49:52.767064 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-k98g8_80433c18-6202-449a-8982-1d738afc9e14/cert-manager-cainjector/0.log" Dec 12 05:49:52 crc kubenswrapper[4796]: I1212 05:49:52.876196 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-lzcb5_5702c38e-0a66-415d-ba4a-6a32f7dbbc70/cert-manager-webhook/0.log" Dec 12 05:49:58 crc kubenswrapper[4796]: I1212 05:49:58.411263 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:49:58 crc kubenswrapper[4796]: E1212 05:49:58.412592 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:50:06 crc kubenswrapper[4796]: I1212 05:50:06.831420 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-jzd58_f1e50c4c-9467-4663-a305-6077b4dc5b1d/nmstate-console-plugin/0.log" Dec 12 05:50:07 crc kubenswrapper[4796]: I1212 05:50:07.014121 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-qrqkv_c918c9c3-c2d3-415d-9942-16385200a014/kube-rbac-proxy/0.log" Dec 12 05:50:07 crc kubenswrapper[4796]: I1212 05:50:07.086681 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lc8l9_284c3da0-54ab-47f6-960d-063c58c0f870/nmstate-handler/0.log" Dec 12 05:50:07 crc kubenswrapper[4796]: I1212 05:50:07.135768 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-qrqkv_c918c9c3-c2d3-415d-9942-16385200a014/nmstate-metrics/0.log" Dec 12 05:50:07 crc kubenswrapper[4796]: I1212 05:50:07.525942 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-w2zw4_3637ba14-6803-4897-9b95-09119916eaa5/nmstate-operator/0.log" Dec 12 05:50:07 crc kubenswrapper[4796]: I1212 05:50:07.594186 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-bd782_3d70cee4-8e4a-49fe-a0c1-e26a7452ba32/nmstate-webhook/0.log" Dec 12 05:50:12 crc kubenswrapper[4796]: I1212 05:50:12.411631 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:50:12 crc kubenswrapper[4796]: E1212 05:50:12.412237 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:50:23 crc kubenswrapper[4796]: I1212 05:50:23.440551 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-vm69t_dfbd0368-d699-416a-bf10-c6e5a6716c1a/kube-rbac-proxy/0.log" Dec 12 05:50:23 crc kubenswrapper[4796]: I1212 05:50:23.624171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vm69t_dfbd0368-d699-416a-bf10-c6e5a6716c1a/controller/0.log" Dec 12 05:50:23 crc kubenswrapper[4796]: I1212 05:50:23.756790 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:50:23 crc kubenswrapper[4796]: I1212 05:50:23.918484 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:50:23 crc kubenswrapper[4796]: I1212 05:50:23.949912 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.001231 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.034400 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.195357 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.234605 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.275426 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.334744 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.492944 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.545893 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.555532 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.567109 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/controller/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.722134 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/frr-metrics/0.log" Dec 12 05:50:24 crc kubenswrapper[4796]: I1212 05:50:24.781810 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/kube-rbac-proxy/0.log" Dec 12 05:50:24 crc 
kubenswrapper[4796]: I1212 05:50:24.847712 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/kube-rbac-proxy-frr/0.log" Dec 12 05:50:25 crc kubenswrapper[4796]: I1212 05:50:25.021506 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/reloader/0.log" Dec 12 05:50:25 crc kubenswrapper[4796]: I1212 05:50:25.191084 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-fpns2_7585dd28-35f6-4a54-b39c-9bdbecf98c13/frr-k8s-webhook-server/0.log" Dec 12 05:50:25 crc kubenswrapper[4796]: I1212 05:50:25.470837 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54b76c8dd-989lk_db1474b8-5eda-4d9e-8364-21082cc5d214/manager/0.log" Dec 12 05:50:25 crc kubenswrapper[4796]: I1212 05:50:25.688566 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c7ffbcf95-nsmtv_9d65581e-d568-49dc-9be0-4e4f06ce52e4/webhook-server/0.log" Dec 12 05:50:25 crc kubenswrapper[4796]: I1212 05:50:25.778519 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nvw2w_379b9266-5fd0-4c73-8d8e-376e85112dbd/kube-rbac-proxy/0.log" Dec 12 05:50:26 crc kubenswrapper[4796]: I1212 05:50:26.320856 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nvw2w_379b9266-5fd0-4c73-8d8e-376e85112dbd/speaker/0.log" Dec 12 05:50:26 crc kubenswrapper[4796]: I1212 05:50:26.357625 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/frr/0.log" Dec 12 05:50:26 crc kubenswrapper[4796]: I1212 05:50:26.412485 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:50:26 crc kubenswrapper[4796]: E1212 05:50:26.412735 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.169806 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.410827 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:50:40 crc kubenswrapper[4796]: E1212 05:50:40.411055 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.552750 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.563241 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.600400 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.811550 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.870801 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 05:50:40 crc kubenswrapper[4796]: I1212 05:50:40.882556 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/extract/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.089410 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.254584 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.315937 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.342910 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.556064 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.627408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.640411 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/extract/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.732385 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 
05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.908890 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 05:50:41 crc kubenswrapper[4796]: I1212 05:50:41.950358 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.010141 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.221189 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.337405 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.461094 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.754820 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.833033 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 05:50:42 crc kubenswrapper[4796]: I1212 05:50:42.835325 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.041375 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/registry-server/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.052730 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.091510 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.291136 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sng6c_05678928-a7d3-4250-8454-abadf034f217/marketplace-operator/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.542932 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.940902 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 
05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.949581 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/registry-server/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.952976 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 05:50:43 crc kubenswrapper[4796]: I1212 05:50:43.967802 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.209602 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.240647 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.350062 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/registry-server/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.393628 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.569593 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.580500 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.597449 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.756894 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 05:50:44 crc kubenswrapper[4796]: I1212 05:50:44.758819 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 05:50:45 crc kubenswrapper[4796]: I1212 05:50:45.445435 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/registry-server/0.log" Dec 12 05:50:51 crc kubenswrapper[4796]: I1212 05:50:51.411493 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:50:51 crc kubenswrapper[4796]: E1212 05:50:51.412406 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:51:02 crc kubenswrapper[4796]: I1212 05:51:02.413414 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:51:02 crc kubenswrapper[4796]: E1212 05:51:02.414471 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:51:14 crc kubenswrapper[4796]: I1212 05:51:14.412024 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:51:14 crc kubenswrapper[4796]: E1212 05:51:14.412749 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:51:29 crc kubenswrapper[4796]: I1212 05:51:29.421395 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:51:29 crc kubenswrapper[4796]: E1212 05:51:29.422091 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:51:40 crc kubenswrapper[4796]: I1212 05:51:40.410958 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:51:40 crc kubenswrapper[4796]: E1212 05:51:40.411728 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:51:52 crc kubenswrapper[4796]: I1212 05:51:52.410966 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:51:52 crc kubenswrapper[4796]: E1212 05:51:52.412822 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:52:03 crc kubenswrapper[4796]: I1212 05:52:03.416500 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:52:03 crc kubenswrapper[4796]: E1212 05:52:03.418156 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:52:14 crc kubenswrapper[4796]: I1212 05:52:14.412142 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:52:14 crc kubenswrapper[4796]: E1212 05:52:14.413088 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:52:26 crc kubenswrapper[4796]: I1212 05:52:26.411154 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:52:26 crc kubenswrapper[4796]: E1212 05:52:26.412033 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:52:41 crc kubenswrapper[4796]: I1212 05:52:41.413181 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:52:41 crc kubenswrapper[4796]: E1212 05:52:41.414108 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:52:56 crc kubenswrapper[4796]: I1212 05:52:56.412513 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:52:56 crc kubenswrapper[4796]: E1212 05:52:56.413275 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:53:02 crc kubenswrapper[4796]: I1212 05:53:02.155201 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerID="34d188acd38e6cda61163af65da56837ad63784a214f6164d6836d9d732ca759" exitCode=0 Dec 12 05:53:02 crc kubenswrapper[4796]: I1212 05:53:02.155291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-687pd/must-gather-8jfrm" event={"ID":"9399ad7e-d3f0-41d6-bc32-bae910aab5ff","Type":"ContainerDied","Data":"34d188acd38e6cda61163af65da56837ad63784a214f6164d6836d9d732ca759"} Dec 12 05:53:02 crc kubenswrapper[4796]: I1212 05:53:02.156419 4796 scope.go:117] "RemoveContainer" containerID="34d188acd38e6cda61163af65da56837ad63784a214f6164d6836d9d732ca759" Dec 12 05:53:02 crc kubenswrapper[4796]: I1212 05:53:02.364823 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-687pd_must-gather-8jfrm_9399ad7e-d3f0-41d6-bc32-bae910aab5ff/gather/0.log" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.933056 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:04 crc kubenswrapper[4796]: E1212 05:53:04.935233 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="registry-server" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.935376 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="registry-server" Dec 12 05:53:04 crc kubenswrapper[4796]: E1212 05:53:04.935477 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="extract-content" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.935552 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="extract-content" Dec 12 05:53:04 crc kubenswrapper[4796]: E1212 05:53:04.935643 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="extract-utilities" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.935718 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="extract-utilities" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.936049 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7a3274-ccf5-475e-a325-bd32b3640a1b" containerName="registry-server" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.937942 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:04 crc kubenswrapper[4796]: I1212 05:53:04.970532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.072837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.072880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.073123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzzc\" (UniqueName: \"kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.174378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzzc\" (UniqueName: \"kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.174546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.174579 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.175096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.175105 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.684266 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fgzzc\" (UniqueName: \"kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc\") pod \"redhat-marketplace-jtgkf\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:05 crc kubenswrapper[4796]: I1212 05:53:05.861467 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:06 crc kubenswrapper[4796]: I1212 05:53:06.317142 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:07 crc kubenswrapper[4796]: I1212 05:53:07.197316 4796 generic.go:334] "Generic (PLEG): container finished" podID="86e5210c-d799-4510-b7af-cd94b71ed532" containerID="d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c" exitCode=0 Dec 12 05:53:07 crc kubenswrapper[4796]: I1212 05:53:07.197372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerDied","Data":"d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c"} Dec 12 05:53:07 crc kubenswrapper[4796]: I1212 05:53:07.197576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerStarted","Data":"49a822057a511dc3a46bfc46fa155829cb1e95cbe7f7cb3c3004d59a2a67ad01"} Dec 12 05:53:07 crc kubenswrapper[4796]: I1212 05:53:07.200101 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.739471 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.743710 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.752642 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.859627 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.859683 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.859708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcsp\" (UniqueName: \"kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.961107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.961176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.961203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrcsp\" (UniqueName: \"kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.961588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.961660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 05:53:08.983984 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nrcsp\" (UniqueName: \"kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp\") pod \"certified-operators-cp8z9\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:09 crc kubenswrapper[4796]: I1212 05:53:09.065103 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:09 crc kubenswrapper[4796]: I1212 05:53:09.226211 4796 generic.go:334] "Generic (PLEG): container finished" podID="86e5210c-d799-4510-b7af-cd94b71ed532" containerID="adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e" exitCode=0 Dec 12 05:53:09 crc kubenswrapper[4796]: I1212 05:53:09.226541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerDied","Data":"adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e"} Dec 12 05:53:09 crc kubenswrapper[4796]: I1212 05:53:09.407217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:09 crc kubenswrapper[4796]: I1212 05:53:09.419855 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:53:09 crc kubenswrapper[4796]: E1212 05:53:09.420077 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:53:10 crc kubenswrapper[4796]: I1212 05:53:10.241020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerStarted","Data":"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077"} Dec 12 05:53:10 crc kubenswrapper[4796]: I1212 05:53:10.244432 4796 generic.go:334] "Generic (PLEG): container finished" podID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerID="0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7" exitCode=0 Dec 12 05:53:10 crc kubenswrapper[4796]: I1212 05:53:10.244476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerDied","Data":"0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7"} Dec 12 05:53:10 crc kubenswrapper[4796]: I1212 05:53:10.244502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerStarted","Data":"85c4cee9e71f2af5aec233a248c52bc9f3a1d0a94acc35ae46b02a0f4af2c1ce"} Dec 12 05:53:10 crc kubenswrapper[4796]: I1212 05:53:10.265668 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jtgkf" podStartSLOduration=3.6940757939999997 podStartE2EDuration="6.265652406s" podCreationTimestamp="2025-12-12 05:53:04 +0000 UTC" firstStartedPulling="2025-12-12 05:53:07.199879801 +0000 UTC m=+4778.075896948" 
lastFinishedPulling="2025-12-12 05:53:09.771456413 +0000 UTC m=+4780.647473560" observedRunningTime="2025-12-12 05:53:10.259225913 +0000 UTC m=+4781.135243060" watchObservedRunningTime="2025-12-12 05:53:10.265652406 +0000 UTC m=+4781.141669553" Dec 12 05:53:11 crc kubenswrapper[4796]: I1212 05:53:11.261363 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerStarted","Data":"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699"} Dec 12 05:53:11 crc kubenswrapper[4796]: I1212 05:53:11.784777 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-687pd/must-gather-8jfrm"] Dec 12 05:53:11 crc kubenswrapper[4796]: I1212 05:53:11.785378 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-687pd/must-gather-8jfrm" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="copy" containerID="cri-o://0340187e3cc100a7b49b63bfd4c53eef80164932adc4e72dc262d0bb6946a1b7" gracePeriod=2 Dec 12 05:53:11 crc kubenswrapper[4796]: I1212 05:53:11.794998 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-687pd/must-gather-8jfrm"] Dec 12 05:53:12 crc kubenswrapper[4796]: I1212 05:53:12.288465 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-687pd_must-gather-8jfrm_9399ad7e-d3f0-41d6-bc32-bae910aab5ff/copy/0.log" Dec 12 05:53:12 crc kubenswrapper[4796]: I1212 05:53:12.290112 4796 generic.go:334] "Generic (PLEG): container finished" podID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerID="0340187e3cc100a7b49b63bfd4c53eef80164932adc4e72dc262d0bb6946a1b7" exitCode=143 Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.299610 4796 generic.go:334] "Generic (PLEG): container finished" podID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerID="7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699" exitCode=0 Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.299651 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerDied","Data":"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699"} Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.558897 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-687pd_must-gather-8jfrm_9399ad7e-d3f0-41d6-bc32-bae910aab5ff/copy/0.log" Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.559595 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.651994 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wxxs\" (UniqueName: \"kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs\") pod \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.652050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output\") pod \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\" (UID: \"9399ad7e-d3f0-41d6-bc32-bae910aab5ff\") " Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.673210 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs" (OuterVolumeSpecName: "kube-api-access-6wxxs") pod "9399ad7e-d3f0-41d6-bc32-bae910aab5ff" (UID: "9399ad7e-d3f0-41d6-bc32-bae910aab5ff"). InnerVolumeSpecName "kube-api-access-6wxxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.754124 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wxxs\" (UniqueName: \"kubernetes.io/projected/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-kube-api-access-6wxxs\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.799377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9399ad7e-d3f0-41d6-bc32-bae910aab5ff" (UID: "9399ad7e-d3f0-41d6-bc32-bae910aab5ff"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:53:13 crc kubenswrapper[4796]: I1212 05:53:13.857123 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9399ad7e-d3f0-41d6-bc32-bae910aab5ff-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.308538 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-687pd_must-gather-8jfrm_9399ad7e-d3f0-41d6-bc32-bae910aab5ff/copy/0.log" Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.308989 4796 scope.go:117] "RemoveContainer" containerID="0340187e3cc100a7b49b63bfd4c53eef80164932adc4e72dc262d0bb6946a1b7" Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.309154 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-687pd/must-gather-8jfrm" Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.312507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerStarted","Data":"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce"} Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.331219 4796 scope.go:117] "RemoveContainer" containerID="34d188acd38e6cda61163af65da56837ad63784a214f6164d6836d9d732ca759" Dec 12 05:53:14 crc kubenswrapper[4796]: I1212 05:53:14.344645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cp8z9" podStartSLOduration=2.760559292 podStartE2EDuration="6.344621591s" podCreationTimestamp="2025-12-12 05:53:08 +0000 UTC" firstStartedPulling="2025-12-12 05:53:10.246739209 +0000 UTC m=+4781.122756356" lastFinishedPulling="2025-12-12 05:53:13.830801508 +0000 UTC m=+4784.706818655" observedRunningTime="2025-12-12 05:53:14.339598903 +0000 UTC m=+4785.215616050" watchObservedRunningTime="2025-12-12 05:53:14.344621591 +0000 UTC m=+4785.220638738" Dec 12 05:53:15 crc kubenswrapper[4796]: I1212 05:53:15.420776 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" path="/var/lib/kubelet/pods/9399ad7e-d3f0-41d6-bc32-bae910aab5ff/volumes" Dec 12 05:53:15 crc kubenswrapper[4796]: I1212 05:53:15.862593 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:15 crc kubenswrapper[4796]: I1212 05:53:15.862886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:15 crc kubenswrapper[4796]: I1212 05:53:15.908790 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:16 crc kubenswrapper[4796]: I1212 05:53:16.382062 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:17 crc kubenswrapper[4796]: I1212 05:53:17.218955 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.348717 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jtgkf" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="registry-server" containerID="cri-o://40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077" gracePeriod=2 Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.795174 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.953458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgzzc\" (UniqueName: \"kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc\") pod \"86e5210c-d799-4510-b7af-cd94b71ed532\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.953639 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities\") pod \"86e5210c-d799-4510-b7af-cd94b71ed532\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.953710 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content\") pod \"86e5210c-d799-4510-b7af-cd94b71ed532\" (UID: \"86e5210c-d799-4510-b7af-cd94b71ed532\") " Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.954342 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities" (OuterVolumeSpecName: "utilities") pod "86e5210c-d799-4510-b7af-cd94b71ed532" (UID: "86e5210c-d799-4510-b7af-cd94b71ed532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.962486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc" (OuterVolumeSpecName: "kube-api-access-fgzzc") pod "86e5210c-d799-4510-b7af-cd94b71ed532" (UID: "86e5210c-d799-4510-b7af-cd94b71ed532"). InnerVolumeSpecName "kube-api-access-fgzzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:53:18 crc kubenswrapper[4796]: I1212 05:53:18.983946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86e5210c-d799-4510-b7af-cd94b71ed532" (UID: "86e5210c-d799-4510-b7af-cd94b71ed532"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.056914 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.057426 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e5210c-d799-4510-b7af-cd94b71ed532-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.057535 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgzzc\" (UniqueName: \"kubernetes.io/projected/86e5210c-d799-4510-b7af-cd94b71ed532-kube-api-access-fgzzc\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.065868 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.066021 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.112544 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.358464 4796 generic.go:334] "Generic (PLEG): container finished" podID="86e5210c-d799-4510-b7af-cd94b71ed532" containerID="40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077" exitCode=0 Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.358525 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtgkf" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.358562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerDied","Data":"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077"} Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.358601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtgkf" event={"ID":"86e5210c-d799-4510-b7af-cd94b71ed532","Type":"ContainerDied","Data":"49a822057a511dc3a46bfc46fa155829cb1e95cbe7f7cb3c3004d59a2a67ad01"} Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.358635 4796 scope.go:117] "RemoveContainer" containerID="40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.407466 4796 scope.go:117] "RemoveContainer" containerID="adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.407669 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.428898 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtgkf"] Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.429024 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.429428 4796 scope.go:117] "RemoveContainer" containerID="d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.474796 4796 scope.go:117] "RemoveContainer" containerID="40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077" Dec 12 05:53:19 crc kubenswrapper[4796]: E1212 05:53:19.475165 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077\": container with ID starting with 40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077 not found: ID does not exist" containerID="40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.475214 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077"} err="failed to get container status \"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077\": rpc error: code = NotFound desc = could not find container \"40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077\": container with ID starting with 40a9326e6556d5082cd3371a6fd8a78af628c00055fd3e62b85b57b1aca91077 not found: ID does not exist" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.475238 4796 scope.go:117] "RemoveContainer" containerID="adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e" Dec 12 05:53:19 crc kubenswrapper[4796]: E1212 05:53:19.475539 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e\": container with ID starting with 
adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e not found: ID does not exist" containerID="adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.475570 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e"} err="failed to get container status \"adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e\": rpc error: code = NotFound desc = could not find container \"adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e\": container with ID starting with adb46433123f402b3262a66461dc4f563c3dd7ad1bd40bca5450806dc858ad4e not found: ID does not exist" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.475603 4796 scope.go:117] "RemoveContainer" containerID="d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c" Dec 12 05:53:19 crc kubenswrapper[4796]: E1212 05:53:19.475791 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c\": container with ID starting with d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c not found: ID does not exist" containerID="d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c" Dec 12 05:53:19 crc kubenswrapper[4796]: I1212 05:53:19.475811 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c"} err="failed to get container status \"d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c\": rpc error: code = NotFound desc = could not find container \"d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c\": container with ID starting with d605e7e3dcb5fbfb70e556064a1c57589d185db1d0bc448830e2965db2ca2a3c not found: ID does not exist" Dec 12 05:53:21 crc kubenswrapper[4796]: I1212 05:53:21.425249 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" path="/var/lib/kubelet/pods/86e5210c-d799-4510-b7af-cd94b71ed532/volumes" Dec 12 05:53:21 crc kubenswrapper[4796]: I1212 05:53:21.521114 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.387095 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cp8z9" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="registry-server" containerID="cri-o://fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce" gracePeriod=2 Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.859195 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.926665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities\") pod \"577e665e-4551-4f7d-9c96-dcf9532b2682\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.926984 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content\") pod \"577e665e-4551-4f7d-9c96-dcf9532b2682\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.927131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrcsp\" (UniqueName: \"kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp\") pod \"577e665e-4551-4f7d-9c96-dcf9532b2682\" (UID: \"577e665e-4551-4f7d-9c96-dcf9532b2682\") " Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.927855 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities" (OuterVolumeSpecName: "utilities") pod "577e665e-4551-4f7d-9c96-dcf9532b2682" (UID: "577e665e-4551-4f7d-9c96-dcf9532b2682"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.943566 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp" (OuterVolumeSpecName: "kube-api-access-nrcsp") pod "577e665e-4551-4f7d-9c96-dcf9532b2682" (UID: "577e665e-4551-4f7d-9c96-dcf9532b2682"). InnerVolumeSpecName "kube-api-access-nrcsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:53:22 crc kubenswrapper[4796]: I1212 05:53:22.989186 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "577e665e-4551-4f7d-9c96-dcf9532b2682" (UID: "577e665e-4551-4f7d-9c96-dcf9532b2682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.028732 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.028770 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577e665e-4551-4f7d-9c96-dcf9532b2682-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.028785 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrcsp\" (UniqueName: \"kubernetes.io/projected/577e665e-4551-4f7d-9c96-dcf9532b2682-kube-api-access-nrcsp\") on node \"crc\" DevicePath \"\"" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.397705 4796 generic.go:334] "Generic (PLEG): container finished" podID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerID="fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce" exitCode=0 Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.397755 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerDied","Data":"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce"} Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.397767 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp8z9" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.397785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp8z9" event={"ID":"577e665e-4551-4f7d-9c96-dcf9532b2682","Type":"ContainerDied","Data":"85c4cee9e71f2af5aec233a248c52bc9f3a1d0a94acc35ae46b02a0f4af2c1ce"} Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.397806 4796 scope.go:117] "RemoveContainer" containerID="fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.414330 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:53:23 crc kubenswrapper[4796]: E1212 05:53:23.414865 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.416773 4796 scope.go:117] "RemoveContainer" containerID="7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.457375 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.457529 4796 scope.go:117] "RemoveContainer" containerID="0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.477943 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cp8z9"] Dec 12 05:53:23 crc 
kubenswrapper[4796]: I1212 05:53:23.516089 4796 scope.go:117] "RemoveContainer" containerID="fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce" Dec 12 05:53:23 crc kubenswrapper[4796]: E1212 05:53:23.520313 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce\": container with ID starting with fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce not found: ID does not exist" containerID="fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.520356 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce"} err="failed to get container status \"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce\": rpc error: code = NotFound desc = could not find container \"fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce\": container with ID starting with fc53a58eb8581791a022b973f402f3ada4df994570d0c6e248b8bbe7d4e24bce not found: ID does not exist" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.520386 4796 scope.go:117] "RemoveContainer" containerID="7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699" Dec 12 05:53:23 crc kubenswrapper[4796]: E1212 05:53:23.520735 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699\": container with ID starting with 7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699 not found: ID does not exist" containerID="7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.520767 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699"} err="failed to get container status \"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699\": rpc error: code = NotFound desc = could not find container \"7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699\": container with ID starting with 7d7875f3e683e38423187ce7d3664d4424b8f4381bcf29714b492ac5f25d4699 not found: ID does not exist" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.520787 4796 scope.go:117] "RemoveContainer" containerID="0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7" Dec 12 05:53:23 crc kubenswrapper[4796]: E1212 05:53:23.522805 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7\": container with ID starting with 0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7 not found: ID does not exist" containerID="0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7" Dec 12 05:53:23 crc kubenswrapper[4796]: I1212 05:53:23.522836 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7"} err="failed to get container status \"0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7\": rpc error: code = NotFound desc = could not find container 
\"0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7\": container with ID starting with 0c80e95174f1b52d10cd51050a66cb393ededc0b9471e470d3b8d134ecc8bbf7 not found: ID does not exist" Dec 12 05:53:25 crc kubenswrapper[4796]: I1212 05:53:25.425926 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" path="/var/lib/kubelet/pods/577e665e-4551-4f7d-9c96-dcf9532b2682/volumes" Dec 12 05:53:37 crc kubenswrapper[4796]: I1212 05:53:37.411563 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:53:37 crc kubenswrapper[4796]: E1212 05:53:37.412298 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:53:48 crc kubenswrapper[4796]: I1212 05:53:48.411415 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:53:48 crc kubenswrapper[4796]: E1212 05:53:48.412146 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:54:02 crc kubenswrapper[4796]: I1212 05:54:02.411567 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:54:02 crc kubenswrapper[4796]: E1212 05:54:02.412835 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:54:04 crc kubenswrapper[4796]: I1212 05:54:04.274415 4796 scope.go:117] "RemoveContainer" containerID="dca5bc49dc613fdb4d5445965fdd6bda6fd623a3d791ee990b68764557cc07f3" Dec 12 05:54:14 crc kubenswrapper[4796]: I1212 05:54:14.412597 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:54:14 crc kubenswrapper[4796]: E1212 05:54:14.413828 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:54:25 crc kubenswrapper[4796]: I1212 05:54:25.411869 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:54:25 crc kubenswrapper[4796]: E1212 
05:54:25.412902 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 05:54:39 crc kubenswrapper[4796]: I1212 05:54:39.417433 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:54:40 crc kubenswrapper[4796]: I1212 05:54:40.120128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48"} Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.208634 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9xz97/must-gather-gj8nj"] Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.210671 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="extract-content" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.219553 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="extract-content" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.219746 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.219831 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.219918 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="extract-utilities" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.219994 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="extract-utilities" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.220083 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="gather" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.220176 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="gather" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.220257 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.220356 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.220452 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="extract-utilities" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.220537 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="extract-utilities" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 
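When analysing a capture like this one it helps to reduce each kubenswrapper entry to a timestamp, severity, source location, and the pod it concerns. The stand-alone sketch below does that with the standard library only, assuming one journal entry per input line as journalctl emits them; the regular expression and output format are assumptions tuned to the lines in this capture, not a formal definition of the klog format.

// Illustrative only: reduce kubenswrapper journal entries like the ones in
// this capture to a per-pod timeline. The regexp is an assumption tuned to
// the "Dec 12 05:53:08 crc kubenswrapper[4796]: I1212 ..." shape seen here,
// not a formal grammar for klog output; it assumes one entry per input line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	entryRE = regexp.MustCompile(
		`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) \S+ kubenswrapper\[\d+\]: ([IWE])\d+ \d{2}:\d{2}:\d{2}\.\d+ +\d+ (\S+)\] (.*)$`)
	podRE = regexp.MustCompile(`pod="([^"]+)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet entries can be very long
	for sc.Scan() {
		m := entryRE.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // systemd, restorecon, or other non-kubelet lines
		}
		ts, sev, src, msg := m[1], m[2], m[3], m[4]
		pod := "-"
		if p := podRE.FindStringSubmatch(msg); p != nil {
			pod = p[1]
		}
		fmt.Printf("%s %s %-50s %s  %s\n", ts, sev, pod, src, msg)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}

Fed a saved copy of this journal (for example: go run timeline.go < kubelet.log | grep must-gather-gj8nj, where timeline.go and kubelet.log are placeholder names), it prints one line per kubelet event with the pod name pulled out of the structured message.
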
05:56:11.220626 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="extract-content" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.220701 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="extract-content" Dec 12 05:56:11 crc kubenswrapper[4796]: E1212 05:56:11.220778 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="copy" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.220848 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="copy" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.221321 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="copy" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.221418 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5210c-d799-4510-b7af-cd94b71ed532" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.221499 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9399ad7e-d3f0-41d6-bc32-bae910aab5ff" containerName="gather" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.221590 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="577e665e-4551-4f7d-9c96-dcf9532b2682" containerName="registry-server" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.222927 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.228882 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9xz97"/"openshift-service-ca.crt" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.231443 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9xz97"/"default-dockercfg-ffwqd" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.235769 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9xz97"/"kube-root-ca.crt" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.280799 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9xz97/must-gather-gj8nj"] Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.399678 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output\") pod \"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.399851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlsnv\" (UniqueName: \"kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv\") pod \"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.501199 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlsnv\" (UniqueName: \"kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv\") pod 
\"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.501366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output\") pod \"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.501924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output\") pod \"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.526739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlsnv\" (UniqueName: \"kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv\") pod \"must-gather-gj8nj\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:11 crc kubenswrapper[4796]: I1212 05:56:11.557468 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 05:56:12 crc kubenswrapper[4796]: I1212 05:56:12.075672 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9xz97/must-gather-gj8nj"] Dec 12 05:56:13 crc kubenswrapper[4796]: I1212 05:56:13.017732 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/must-gather-gj8nj" event={"ID":"bcb0f8fd-59e3-4053-8f8d-6a30256e1491","Type":"ContainerStarted","Data":"cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6"} Dec 12 05:56:13 crc kubenswrapper[4796]: I1212 05:56:13.018333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/must-gather-gj8nj" event={"ID":"bcb0f8fd-59e3-4053-8f8d-6a30256e1491","Type":"ContainerStarted","Data":"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b"} Dec 12 05:56:13 crc kubenswrapper[4796]: I1212 05:56:13.018354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/must-gather-gj8nj" event={"ID":"bcb0f8fd-59e3-4053-8f8d-6a30256e1491","Type":"ContainerStarted","Data":"8b774a529a737943e81493010994c8b331ce40629144e2ce96362dded8573bde"} Dec 12 05:56:13 crc kubenswrapper[4796]: I1212 05:56:13.068581 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9xz97/must-gather-gj8nj" podStartSLOduration=2.068563026 podStartE2EDuration="2.068563026s" podCreationTimestamp="2025-12-12 05:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:56:13.065576712 +0000 UTC m=+4963.941593869" watchObservedRunningTime="2025-12-12 05:56:13.068563026 +0000 UTC m=+4963.944580173" Dec 12 05:56:16 crc kubenswrapper[4796]: I1212 05:56:16.831136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9xz97/crc-debug-4m2kx"] Dec 12 05:56:16 crc kubenswrapper[4796]: I1212 05:56:16.833573 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:16 crc kubenswrapper[4796]: I1212 05:56:16.920601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:16 crc kubenswrapper[4796]: I1212 05:56:16.920681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5l6\" (UniqueName: \"kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: I1212 05:56:17.023113 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: I1212 05:56:17.023198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5l6\" (UniqueName: \"kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: I1212 05:56:17.023673 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: I1212 05:56:17.053907 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5l6\" (UniqueName: \"kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6\") pod \"crc-debug-4m2kx\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: I1212 05:56:17.154323 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:56:17 crc kubenswrapper[4796]: W1212 05:56:17.212245 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17ca0046_4679_45b3_8578_115aea64763b.slice/crio-8e1d88d5c723597abd41261dfb18063aea899bd4d799f793952ffcc34ce0eab7 WatchSource:0}: Error finding container 8e1d88d5c723597abd41261dfb18063aea899bd4d799f793952ffcc34ce0eab7: Status 404 returned error can't find the container with id 8e1d88d5c723597abd41261dfb18063aea899bd4d799f793952ffcc34ce0eab7 Dec 12 05:56:18 crc kubenswrapper[4796]: I1212 05:56:18.080804 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" event={"ID":"17ca0046-4679-45b3-8578-115aea64763b","Type":"ContainerStarted","Data":"a185fc9c2a90fa61349fed6502f84d44562c7e552dd5c708b577e9bad9cedb75"} Dec 12 05:56:18 crc kubenswrapper[4796]: I1212 05:56:18.082198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" event={"ID":"17ca0046-4679-45b3-8578-115aea64763b","Type":"ContainerStarted","Data":"8e1d88d5c723597abd41261dfb18063aea899bd4d799f793952ffcc34ce0eab7"} Dec 12 05:56:18 crc kubenswrapper[4796]: I1212 05:56:18.110122 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" podStartSLOduration=2.110105947 podStartE2EDuration="2.110105947s" podCreationTimestamp="2025-12-12 05:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:56:18.103654164 +0000 UTC m=+4968.979671321" watchObservedRunningTime="2025-12-12 05:56:18.110105947 +0000 UTC m=+4968.986123094" Dec 12 05:56:59 crc kubenswrapper[4796]: I1212 05:56:59.415925 4796 generic.go:334] "Generic (PLEG): container finished" podID="17ca0046-4679-45b3-8578-115aea64763b" containerID="a185fc9c2a90fa61349fed6502f84d44562c7e552dd5c708b577e9bad9cedb75" exitCode=0 Dec 12 05:56:59 crc kubenswrapper[4796]: I1212 05:56:59.423452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" event={"ID":"17ca0046-4679-45b3-8578-115aea64763b","Type":"ContainerDied","Data":"a185fc9c2a90fa61349fed6502f84d44562c7e552dd5c708b577e9bad9cedb75"} Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.527697 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.566390 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-4m2kx"] Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.571741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc5l6\" (UniqueName: \"kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6\") pod \"17ca0046-4679-45b3-8578-115aea64763b\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.571872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host\") pod \"17ca0046-4679-45b3-8578-115aea64763b\" (UID: \"17ca0046-4679-45b3-8578-115aea64763b\") " Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.572478 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host" (OuterVolumeSpecName: "host") pod "17ca0046-4679-45b3-8578-115aea64763b" (UID: "17ca0046-4679-45b3-8578-115aea64763b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.578504 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-4m2kx"] Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.587095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6" (OuterVolumeSpecName: "kube-api-access-nc5l6") pod "17ca0046-4679-45b3-8578-115aea64763b" (UID: "17ca0046-4679-45b3-8578-115aea64763b"). InnerVolumeSpecName "kube-api-access-nc5l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.673632 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17ca0046-4679-45b3-8578-115aea64763b-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:00 crc kubenswrapper[4796]: I1212 05:57:00.673667 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc5l6\" (UniqueName: \"kubernetes.io/projected/17ca0046-4679-45b3-8578-115aea64763b-kube-api-access-nc5l6\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.421235 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ca0046-4679-45b3-8578-115aea64763b" path="/var/lib/kubelet/pods/17ca0046-4679-45b3-8578-115aea64763b/volumes" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.435010 4796 scope.go:117] "RemoveContainer" containerID="a185fc9c2a90fa61349fed6502f84d44562c7e552dd5c708b577e9bad9cedb75" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.435184 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-4m2kx" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.776882 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9xz97/crc-debug-r2gcz"] Dec 12 05:57:01 crc kubenswrapper[4796]: E1212 05:57:01.777657 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ca0046-4679-45b3-8578-115aea64763b" containerName="container-00" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.777672 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ca0046-4679-45b3-8578-115aea64763b" containerName="container-00" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.777932 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ca0046-4679-45b3-8578-115aea64763b" containerName="container-00" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.778701 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.794410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxxq\" (UniqueName: \"kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.794451 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.895606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xxxq\" (UniqueName: \"kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.895665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.895823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:01 crc kubenswrapper[4796]: I1212 05:57:01.917178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xxxq\" (UniqueName: \"kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq\") pod \"crc-debug-r2gcz\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.097079 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.447157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" event={"ID":"9151b261-321d-430a-840b-fa37b16ac424","Type":"ContainerStarted","Data":"9e72a164ee4571f9e1860c5cb2f401ebcde7b5060dc5b1984bb9e1c7cab1f211"} Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.447488 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" event={"ID":"9151b261-321d-430a-840b-fa37b16ac424","Type":"ContainerStarted","Data":"fba288f119fe7849ff7f0837c4222641b482bcde82fb9e3062b27ef58fd3c1a9"} Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.468948 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" podStartSLOduration=1.468922929 podStartE2EDuration="1.468922929s" podCreationTimestamp="2025-12-12 05:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 05:57:02.460148274 +0000 UTC m=+5013.336165421" watchObservedRunningTime="2025-12-12 05:57:02.468922929 +0000 UTC m=+5013.344940076" Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.969742 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:57:02 crc kubenswrapper[4796]: I1212 05:57:02.969792 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:57:03 crc kubenswrapper[4796]: I1212 05:57:03.457435 4796 generic.go:334] "Generic (PLEG): container finished" podID="9151b261-321d-430a-840b-fa37b16ac424" containerID="9e72a164ee4571f9e1860c5cb2f401ebcde7b5060dc5b1984bb9e1c7cab1f211" exitCode=0 Dec 12 05:57:03 crc kubenswrapper[4796]: I1212 05:57:03.457631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" event={"ID":"9151b261-321d-430a-840b-fa37b16ac424","Type":"ContainerDied","Data":"9e72a164ee4571f9e1860c5cb2f401ebcde7b5060dc5b1984bb9e1c7cab1f211"} Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.572861 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.678226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host\") pod \"9151b261-321d-430a-840b-fa37b16ac424\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.678593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xxxq\" (UniqueName: \"kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq\") pod \"9151b261-321d-430a-840b-fa37b16ac424\" (UID: \"9151b261-321d-430a-840b-fa37b16ac424\") " Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.679675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host" (OuterVolumeSpecName: "host") pod "9151b261-321d-430a-840b-fa37b16ac424" (UID: "9151b261-321d-430a-840b-fa37b16ac424"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.684170 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq" (OuterVolumeSpecName: "kube-api-access-7xxxq") pod "9151b261-321d-430a-840b-fa37b16ac424" (UID: "9151b261-321d-430a-840b-fa37b16ac424"). InnerVolumeSpecName "kube-api-access-7xxxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.780308 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151b261-321d-430a-840b-fa37b16ac424-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.780340 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xxxq\" (UniqueName: \"kubernetes.io/projected/9151b261-321d-430a-840b-fa37b16ac424-kube-api-access-7xxxq\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.868073 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-r2gcz"] Dec 12 05:57:04 crc kubenswrapper[4796]: I1212 05:57:04.879926 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-r2gcz"] Dec 12 05:57:05 crc kubenswrapper[4796]: I1212 05:57:05.432749 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9151b261-321d-430a-840b-fa37b16ac424" path="/var/lib/kubelet/pods/9151b261-321d-430a-840b-fa37b16ac424/volumes" Dec 12 05:57:05 crc kubenswrapper[4796]: I1212 05:57:05.477126 4796 scope.go:117] "RemoveContainer" containerID="9e72a164ee4571f9e1860c5cb2f401ebcde7b5060dc5b1984bb9e1c7cab1f211" Dec 12 05:57:05 crc kubenswrapper[4796]: I1212 05:57:05.477149 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-r2gcz" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.093450 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9xz97/crc-debug-w2cmp"] Dec 12 05:57:06 crc kubenswrapper[4796]: E1212 05:57:06.094149 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9151b261-321d-430a-840b-fa37b16ac424" containerName="container-00" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.094162 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9151b261-321d-430a-840b-fa37b16ac424" containerName="container-00" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.094371 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9151b261-321d-430a-840b-fa37b16ac424" containerName="container-00" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.094990 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.209317 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxh8k\" (UniqueName: \"kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.209458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.311318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.311404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.311489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxh8k\" (UniqueName: \"kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.330729 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxh8k\" (UniqueName: \"kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k\") pod \"crc-debug-w2cmp\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.412586 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:06 crc kubenswrapper[4796]: W1212 05:57:06.441072 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cf679c8_cd9a_42aa_9743_d6cd07283f03.slice/crio-d90ede67353f778dcc50ad41dbfd5924ee36aa8e588dadba88061cbdbe274208 WatchSource:0}: Error finding container d90ede67353f778dcc50ad41dbfd5924ee36aa8e588dadba88061cbdbe274208: Status 404 returned error can't find the container with id d90ede67353f778dcc50ad41dbfd5924ee36aa8e588dadba88061cbdbe274208 Dec 12 05:57:06 crc kubenswrapper[4796]: I1212 05:57:06.486855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" event={"ID":"0cf679c8-cd9a-42aa-9743-d6cd07283f03","Type":"ContainerStarted","Data":"d90ede67353f778dcc50ad41dbfd5924ee36aa8e588dadba88061cbdbe274208"} Dec 12 05:57:07 crc kubenswrapper[4796]: I1212 05:57:07.497333 4796 generic.go:334] "Generic (PLEG): container finished" podID="0cf679c8-cd9a-42aa-9743-d6cd07283f03" containerID="e2762b6af8f14d3f813bcfd00e659dafca9c1d4bb1a0c7624f7f009522d31e18" exitCode=0 Dec 12 05:57:07 crc kubenswrapper[4796]: I1212 05:57:07.497754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" event={"ID":"0cf679c8-cd9a-42aa-9743-d6cd07283f03","Type":"ContainerDied","Data":"e2762b6af8f14d3f813bcfd00e659dafca9c1d4bb1a0c7624f7f009522d31e18"} Dec 12 05:57:07 crc kubenswrapper[4796]: I1212 05:57:07.542792 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-w2cmp"] Dec 12 05:57:07 crc kubenswrapper[4796]: I1212 05:57:07.558272 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9xz97/crc-debug-w2cmp"] Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.623945 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.813132 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host\") pod \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.813212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxh8k\" (UniqueName: \"kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k\") pod \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\" (UID: \"0cf679c8-cd9a-42aa-9743-d6cd07283f03\") " Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.813312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host" (OuterVolumeSpecName: "host") pod "0cf679c8-cd9a-42aa-9743-d6cd07283f03" (UID: "0cf679c8-cd9a-42aa-9743-d6cd07283f03"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.813850 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0cf679c8-cd9a-42aa-9743-d6cd07283f03-host\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.819212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k" (OuterVolumeSpecName: "kube-api-access-hxh8k") pod "0cf679c8-cd9a-42aa-9743-d6cd07283f03" (UID: "0cf679c8-cd9a-42aa-9743-d6cd07283f03"). InnerVolumeSpecName "kube-api-access-hxh8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:57:08 crc kubenswrapper[4796]: I1212 05:57:08.915265 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxh8k\" (UniqueName: \"kubernetes.io/projected/0cf679c8-cd9a-42aa-9743-d6cd07283f03-kube-api-access-hxh8k\") on node \"crc\" DevicePath \"\"" Dec 12 05:57:09 crc kubenswrapper[4796]: I1212 05:57:09.429771 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf679c8-cd9a-42aa-9743-d6cd07283f03" path="/var/lib/kubelet/pods/0cf679c8-cd9a-42aa-9743-d6cd07283f03/volumes" Dec 12 05:57:09 crc kubenswrapper[4796]: I1212 05:57:09.521697 4796 scope.go:117] "RemoveContainer" containerID="e2762b6af8f14d3f813bcfd00e659dafca9c1d4bb1a0c7624f7f009522d31e18" Dec 12 05:57:09 crc kubenswrapper[4796]: I1212 05:57:09.522236 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/crc-debug-w2cmp" Dec 12 05:57:32 crc kubenswrapper[4796]: I1212 05:57:32.969971 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:57:32 crc kubenswrapper[4796]: I1212 05:57:32.971407 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:57:43 crc kubenswrapper[4796]: I1212 05:57:43.856227 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c7c5bcf6-phtcl_1c5fd6e6-e8f6-46da-81fa-5ae035fbc255/barbican-api/0.log" Dec 12 05:57:43 crc kubenswrapper[4796]: I1212 05:57:43.999824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c7c5bcf6-phtcl_1c5fd6e6-e8f6-46da-81fa-5ae035fbc255/barbican-api-log/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.122704 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8586b565b6-wdsdw_1d1a0aba-21c7-4f4f-95f8-41802b2d23c3/barbican-keystone-listener-log/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.138419 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8586b565b6-wdsdw_1d1a0aba-21c7-4f4f-95f8-41802b2d23c3/barbican-keystone-listener/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.259355 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-56fbf7b8ff-h4cs5_92265cff-6059-4736-a5cb-8935972c0bb8/barbican-worker/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.391496 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56fbf7b8ff-h4cs5_92265cff-6059-4736-a5cb-8935972c0bb8/barbican-worker-log/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.502933 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-85jr6_86779d4a-5602-4b32-8e50-cd72fac17e8a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.648000 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/ceilometer-central-agent/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.716842 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/ceilometer-notification-agent/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.800435 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/proxy-httpd/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.814116 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9f1d6ee-b301-4827-9a5b-8a98d43319bc/sg-core/0.log" Dec 12 05:57:44 crc kubenswrapper[4796]: I1212 05:57:44.973544 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2d0ca4f-8c51-492b-ae06-3d09ecdc4934/cinder-api/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.123692 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b2d0ca4f-8c51-492b-ae06-3d09ecdc4934/cinder-api-log/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.253430 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ec3a4988-59e7-443a-bbf1-31cd16abdcd6/cinder-scheduler/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.357225 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ec3a4988-59e7-443a-bbf1-31cd16abdcd6/probe/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.452327 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-22ksj_906f3822-cad4-497a-a87e-d50a257f3b15/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.648482 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6qncr_33889558-2c62-4dcd-ba10-c98855839d1e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.723945 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/init/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.902499 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/init/0.log" Dec 12 05:57:45 crc kubenswrapper[4796]: I1212 05:57:45.950815 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ckkts_45182716-6fae-4d42-81e2-ccdea8bf145b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.128118 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d47554775-pbd74_ebb0fa33-9ec8-4b2c-b0bd-72ae81c220e7/dnsmasq-dns/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.201372 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cd46cb5-ba6c-480f-a039-95a66caa648a/glance-httpd/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.258164 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4cd46cb5-ba6c-480f-a039-95a66caa648a/glance-log/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.444153 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b88340e6-0adf-40e5-9e91-610c949cd71b/glance-log/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.449155 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b88340e6-0adf-40e5-9e91-610c949cd71b/glance-httpd/0.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.644485 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon/3.log" Dec 12 05:57:46 crc kubenswrapper[4796]: I1212 05:57:46.761548 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon/2.log" Dec 12 05:57:47 crc kubenswrapper[4796]: I1212 05:57:47.016724 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2zkc6_f181d2cb-61a4-4328-88b8-18dd8cd24228/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:47 crc kubenswrapper[4796]: I1212 05:57:47.201936 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb55bccb4-z8p6q_7913672c-384c-472c-89a8-0d546f345a28/horizon-log/0.log" Dec 12 05:57:47 crc kubenswrapper[4796]: I1212 05:57:47.269171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4tkgt_b92a97d2-b9e1-4717-a79c-a085aaaed3b6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:47 crc kubenswrapper[4796]: I1212 05:57:47.712627 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29425261-srdtz_7246a8f1-e2cd-4daa-b4dc-551fdaebcdf9/keystone-cron/0.log" Dec 12 05:57:48 crc kubenswrapper[4796]: I1212 05:57:48.186407 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-86bc7ff485-lzxvk_b621dfe8-e202-40a6-8544-9195e0d7dc80/keystone-api/0.log" Dec 12 05:57:48 crc kubenswrapper[4796]: I1212 05:57:48.350983 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_416b9e99-eb64-4c24-9c32-0fb5bc210a2a/kube-state-metrics/0.log" Dec 12 05:57:48 crc kubenswrapper[4796]: I1212 05:57:48.521427 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zxbf4_61d49bcc-8f04-4fc8-8f61-70e5cc450c5a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:48 crc kubenswrapper[4796]: I1212 05:57:48.985751 4796 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cnp4s_177571dd-6d0b-463d-8831-2983eb8a331d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:49 crc kubenswrapper[4796]: I1212 05:57:49.115657 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c45989d6c-2r8mn_c2090789-6394-4377-8d8c-4c37cd7bd857/neutron-httpd/0.log" Dec 12 05:57:49 crc kubenswrapper[4796]: I1212 05:57:49.241703 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c45989d6c-2r8mn_c2090789-6394-4377-8d8c-4c37cd7bd857/neutron-api/0.log" Dec 12 05:57:50 crc kubenswrapper[4796]: I1212 05:57:50.234014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cd3c1c91-f0c1-4dd0-b23e-227f1353858a/nova-cell0-conductor-conductor/0.log" Dec 12 05:57:50 crc kubenswrapper[4796]: I1212 05:57:50.673215 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e91e5c81-6ced-4f8f-b7ba-c40f35e989ca/nova-cell1-conductor-conductor/0.log" Dec 12 05:57:50 crc kubenswrapper[4796]: I1212 05:57:50.941731 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cb994df9-2eed-4089-9770-ccb138bf3c80/nova-cell1-novncproxy-novncproxy/0.log" Dec 12 05:57:51 crc kubenswrapper[4796]: I1212 05:57:51.008803 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f70e0dab-4b7c-4e6f-b28e-76e72492ca1d/nova-api-api/0.log" Dec 12 05:57:51 crc kubenswrapper[4796]: I1212 05:57:51.037850 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f70e0dab-4b7c-4e6f-b28e-76e72492ca1d/nova-api-log/0.log" Dec 12 05:57:51 crc kubenswrapper[4796]: I1212 05:57:51.277016 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rl279_a4dc653f-0e4f-4c95-a71a-c96d4419f484/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:51 crc kubenswrapper[4796]: I1212 05:57:51.318390 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_556757b9-1c0e-4bc0-8a0f-81a77ab8705b/nova-metadata-log/0.log" Dec 12 05:57:51 crc kubenswrapper[4796]: I1212 05:57:51.817654 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/mysql-bootstrap/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.106667 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/mysql-bootstrap/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.143061 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3338cb28-50b7-41c6-af36-ec2fb86fb949/galera/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.179479 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_df191340-1fad-4c88-b12c-a4af0fc96924/nova-scheduler-scheduler/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.482832 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/mysql-bootstrap/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.662877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/galera/0.log" Dec 12 05:57:52 crc 
kubenswrapper[4796]: I1212 05:57:52.674007 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4b59263e-1bd8-4661-b612-2f4bc4f611f1/mysql-bootstrap/0.log" Dec 12 05:57:52 crc kubenswrapper[4796]: I1212 05:57:52.884531 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9826fd92-e55e-487f-ac6a-73a3e7f4d88a/openstackclient/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.041479 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g7lfn_2dc1f12e-5104-4f56-ae2a-da52e2f60434/openstack-network-exporter/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.266171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ml9sj_0751eb6e-3452-4b8d-abfa-d37121e1a03e/ovn-controller/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.454839 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_556757b9-1c0e-4bc0-8a0f-81a77ab8705b/nova-metadata-metadata/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.506104 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server-init/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.758777 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.791628 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovsdb-server-init/0.log" Dec 12 05:57:53 crc kubenswrapper[4796]: I1212 05:57:53.826065 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xcn6_1f98d057-864c-464b-91e7-85c6462f8afb/ovs-vswitchd/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.042367 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tkc6x_60d6d74d-f5f7-43c4-8462-f073926de480/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.110891 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3ad884c-e210-4b14-b98b-19d888c3886d/openstack-network-exporter/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.189526 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f3ad884c-e210-4b14-b98b-19d888c3886d/ovn-northd/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.361197 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7de6f8df-6271-4b09-94c5-642c37337fcf/openstack-network-exporter/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.448374 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7de6f8df-6271-4b09-94c5-642c37337fcf/ovsdbserver-nb/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.728972 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5a25b030-6ebc-4ac2-8114-f24663c7a815/openstack-network-exporter/0.log" Dec 12 05:57:54 crc kubenswrapper[4796]: I1212 05:57:54.750520 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5a25b030-6ebc-4ac2-8114-f24663c7a815/ovsdbserver-sb/0.log" Dec 12 05:57:55 
crc kubenswrapper[4796]: I1212 05:57:55.035094 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8ffdd64b-7gmkf_724e3890-930d-4492-8599-460add96a852/placement-api/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.090824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/setup-container/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.257555 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8ffdd64b-7gmkf_724e3890-930d-4492-8599-460add96a852/placement-log/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.446595 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/setup-container/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.596551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d31d0723-e71d-4ec0-89e8-645a248d9add/rabbitmq/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.625305 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/setup-container/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.728529 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/setup-container/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.872073 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c4628c3c-0ba5-4dcd-b4a9-003b5dc95119/rabbitmq/0.log" Dec 12 05:57:55 crc kubenswrapper[4796]: I1212 05:57:55.902930 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-c8x8l_e3c26ddb-8907-4b44-bc42-86138dc25d8b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:56 crc kubenswrapper[4796]: I1212 05:57:56.565729 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hhdhc_47c5ed15-7a61-4101-b8f4-470f53ef2a10/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:56 crc kubenswrapper[4796]: I1212 05:57:56.598414 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gxtv7_ebb00117-c00a-49db-aeea-bcff226d7283/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:56 crc kubenswrapper[4796]: I1212 05:57:56.812449 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4m89q_925383e5-f552-447e-a749-b0337865ce48/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:56 crc kubenswrapper[4796]: I1212 05:57:56.862807 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f95w9_d1178cb0-94fb-46a2-84b8-a67ed7e55856/ssh-known-hosts-edpm-deployment/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.178164 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58987c9f79-c2xlb_80ea0a4a-0715-4d5b-be0c-e11f00e6d743/proxy-server/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.259959 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x6qjf_db25cf2d-5a36-4289-b5d2-3a156acaee44/swift-ring-rebalance/0.log" Dec 12 05:57:57 crc 
kubenswrapper[4796]: I1212 05:57:57.355433 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58987c9f79-c2xlb_80ea0a4a-0715-4d5b-be0c-e11f00e6d743/proxy-httpd/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.569489 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-auditor/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.572751 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-reaper/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.706690 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-replicator/0.log" Dec 12 05:57:57 crc kubenswrapper[4796]: I1212 05:57:57.771337 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/account-server/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.204186 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-auditor/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.252099 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-replicator/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.290765 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-server/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.294874 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/container-updater/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.591403 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-replicator/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.603591 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-server/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.624737 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-auditor/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.650355 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-expirer/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.969524 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/object-updater/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.978548 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/rsync/0.log" Dec 12 05:57:58 crc kubenswrapper[4796]: I1212 05:57:58.999049 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_09626c7b-0eba-4fe5-9598-ac562516cb98/swift-recon-cron/0.log" Dec 12 05:57:59 crc kubenswrapper[4796]: I1212 05:57:59.265162 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-phst6_e3e68ee3-5e0b-4748-a03f-9c4d226b690c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:57:59 crc kubenswrapper[4796]: I1212 05:57:59.279426 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5e0cb8e6-8e94-4b4f-9e83-7092dd02dadd/tempest-tests-tempest-tests-runner/0.log" Dec 12 05:57:59 crc kubenswrapper[4796]: I1212 05:57:59.834252 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e58842bb-2205-41c3-90d5-1d87ec35baf5/test-operator-logs-container/0.log" Dec 12 05:57:59 crc kubenswrapper[4796]: I1212 05:57:59.847888 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7j5hg_165cb754-40d9-41ec-abd3-1f5fbaeeb13c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 12 05:58:02 crc kubenswrapper[4796]: I1212 05:58:02.969533 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 05:58:02 crc kubenswrapper[4796]: I1212 05:58:02.970128 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 05:58:02 crc kubenswrapper[4796]: I1212 05:58:02.970200 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 05:58:02 crc kubenswrapper[4796]: I1212 05:58:02.971013 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 05:58:02 crc kubenswrapper[4796]: I1212 05:58:02.971069 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48" gracePeriod=600 Dec 12 05:58:04 crc kubenswrapper[4796]: I1212 05:58:04.105677 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48" exitCode=0 Dec 12 05:58:04 crc kubenswrapper[4796]: I1212 05:58:04.105727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48"} Dec 12 05:58:04 crc kubenswrapper[4796]: I1212 05:58:04.106968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" 
event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd"} Dec 12 05:58:04 crc kubenswrapper[4796]: I1212 05:58:04.107055 4796 scope.go:117] "RemoveContainer" containerID="1f9f89c3c44e960ed55e38ce29df56986849f5ad1b7d21227064b4ea07ce978b" Dec 12 05:58:14 crc kubenswrapper[4796]: I1212 05:58:14.685062 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_497a4966-f578-46a2-a33c-c3288f96f7f1/memcached/0.log" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.808554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:25 crc kubenswrapper[4796]: E1212 05:58:25.810342 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf679c8-cd9a-42aa-9743-d6cd07283f03" containerName="container-00" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.810368 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf679c8-cd9a-42aa-9743-d6cd07283f03" containerName="container-00" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.810678 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf679c8-cd9a-42aa-9743-d6cd07283f03" containerName="container-00" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.812029 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.822545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.933475 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ltx\" (UniqueName: \"kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.933557 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:25 crc kubenswrapper[4796]: I1212 05:58:25.933621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.035139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.035575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7ltx\" (UniqueName: \"kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx\") pod 
\"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.035676 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.035786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.036086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.397390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7ltx\" (UniqueName: \"kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx\") pod \"redhat-operators-lmtpp\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.431155 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:26 crc kubenswrapper[4796]: I1212 05:58:26.967906 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:26 crc kubenswrapper[4796]: W1212 05:58:26.994657 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f56478_89d2_42ad_857a_23f121c48cb2.slice/crio-c6e2ca03800d81ae900cd1473719ac0a221746f0775d5edb65be58517f9c2f3b WatchSource:0}: Error finding container c6e2ca03800d81ae900cd1473719ac0a221746f0775d5edb65be58517f9c2f3b: Status 404 returned error can't find the container with id c6e2ca03800d81ae900cd1473719ac0a221746f0775d5edb65be58517f9c2f3b Dec 12 05:58:27 crc kubenswrapper[4796]: I1212 05:58:27.316709 4796 generic.go:334] "Generic (PLEG): container finished" podID="02f56478-89d2-42ad-857a-23f121c48cb2" containerID="d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b" exitCode=0 Dec 12 05:58:27 crc kubenswrapper[4796]: I1212 05:58:27.316843 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerDied","Data":"d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b"} Dec 12 05:58:27 crc kubenswrapper[4796]: I1212 05:58:27.317034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerStarted","Data":"c6e2ca03800d81ae900cd1473719ac0a221746f0775d5edb65be58517f9c2f3b"} Dec 12 05:58:27 crc kubenswrapper[4796]: I1212 05:58:27.318627 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 05:58:28 crc kubenswrapper[4796]: I1212 05:58:28.328397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerStarted","Data":"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee"} Dec 12 05:58:31 crc kubenswrapper[4796]: I1212 05:58:31.357007 4796 generic.go:334] "Generic (PLEG): container finished" podID="02f56478-89d2-42ad-857a-23f121c48cb2" containerID="d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee" exitCode=0 Dec 12 05:58:31 crc kubenswrapper[4796]: I1212 05:58:31.357043 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerDied","Data":"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee"} Dec 12 05:58:32 crc kubenswrapper[4796]: I1212 05:58:32.385682 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerStarted","Data":"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5"} Dec 12 05:58:32 crc kubenswrapper[4796]: I1212 05:58:32.408088 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lmtpp" podStartSLOduration=2.824420902 podStartE2EDuration="7.408066332s" podCreationTimestamp="2025-12-12 05:58:25 +0000 UTC" firstStartedPulling="2025-12-12 05:58:27.318434809 +0000 UTC m=+5098.194451946" lastFinishedPulling="2025-12-12 05:58:31.902080229 +0000 UTC m=+5102.778097376" 
observedRunningTime="2025-12-12 05:58:32.40576211 +0000 UTC m=+5103.281779257" watchObservedRunningTime="2025-12-12 05:58:32.408066332 +0000 UTC m=+5103.284083479" Dec 12 05:58:34 crc kubenswrapper[4796]: I1212 05:58:34.630675 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:58:34 crc kubenswrapper[4796]: I1212 05:58:34.841747 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:58:34 crc kubenswrapper[4796]: I1212 05:58:34.897673 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:58:34 crc kubenswrapper[4796]: I1212 05:58:34.945568 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.190106 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/pull/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.214945 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/util/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.229236 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a9e31712e850cea4433d4f37012249ba0523fc7015fd35129b99450d2059w22_ef694040-71d2-464d-b70b-15b0ab44a2d8/extract/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.458780 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-f9h97_f4b37e55-be7c-467b-9739-e82c28f1916e/kube-rbac-proxy/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.543020 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-f9h97_f4b37e55-be7c-467b-9739-e82c28f1916e/manager/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.615961 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-27f9h_19b30665-06c6-48e5-8ec7-3eeaf3d3e72e/kube-rbac-proxy/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.808245 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-27f9h_19b30665-06c6-48e5-8ec7-3eeaf3d3e72e/manager/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.893025 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-qzrqh_565c4c89-1b44-462b-8307-15d3d0a6cf1f/kube-rbac-proxy/0.log" Dec 12 05:58:35 crc kubenswrapper[4796]: I1212 05:58:35.974683 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-qzrqh_565c4c89-1b44-462b-8307-15d3d0a6cf1f/manager/0.log" Dec 12 05:58:36 crc 
kubenswrapper[4796]: I1212 05:58:36.316458 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-mdmcv_22df48e7-88f5-43df-bdce-9116599bea1b/kube-rbac-proxy/0.log" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.431333 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.431574 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.459876 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-mdmcv_22df48e7-88f5-43df-bdce-9116599bea1b/manager/0.log" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.592422 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jqd2f_3092bc98-4221-47ff-bae0-06efcfa85522/kube-rbac-proxy/0.log" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.625226 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jqd2f_3092bc98-4221-47ff-bae0-06efcfa85522/manager/0.log" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.760672 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-5v7h7_035421c3-b1dd-48de-a195-04bfef7c5a0e/kube-rbac-proxy/0.log" Dec 12 05:58:36 crc kubenswrapper[4796]: I1212 05:58:36.894732 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-5v7h7_035421c3-b1dd-48de-a195-04bfef7c5a0e/manager/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.070784 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lzzhj_301fd006-5a61-46bd-b19f-bbd1ba8f7baf/kube-rbac-proxy/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.258782 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-w5fjz_c14d829a-f63e-404c-b117-65c0e15280e8/kube-rbac-proxy/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.279885 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lzzhj_301fd006-5a61-46bd-b19f-bbd1ba8f7baf/manager/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.350538 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-w5fjz_c14d829a-f63e-404c-b117-65c0e15280e8/manager/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.478350 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmtpp" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" probeResult="failure" output=< Dec 12 05:58:37 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 05:58:37 crc kubenswrapper[4796]: > Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.843081 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-47x2m_9fb465c9-338c-4755-ba24-b7985e57fa06/kube-rbac-proxy/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.852203 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-6dsjv_b47de1f3-3223-47bb-a707-72ee23490049/kube-rbac-proxy/0.log" Dec 12 05:58:37 crc kubenswrapper[4796]: I1212 05:58:37.906350 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-6dsjv_b47de1f3-3223-47bb-a707-72ee23490049/manager/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.149305 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-47x2m_9fb465c9-338c-4755-ba24-b7985e57fa06/manager/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.162804 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-ws54v_43ac4ab3-1f18-4b18-8a83-1561837988eb/kube-rbac-proxy/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.227997 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-ws54v_43ac4ab3-1f18-4b18-8a83-1561837988eb/manager/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.398651 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-2dggq_8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73/kube-rbac-proxy/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.452622 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-2dggq_8d4e9d7a-4caa-46c1-9e85-e0a9ff867f73/manager/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.697404 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-n6m6j_a0340c55-5a39-4841-a602-694ef484e3ec/kube-rbac-proxy/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.830551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-kgq7g_6a645239-185e-4bfb-b8a8-9c442ae1c379/kube-rbac-proxy/0.log" Dec 12 05:58:38 crc kubenswrapper[4796]: I1212 05:58:38.905155 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-n6m6j_a0340c55-5a39-4841-a602-694ef484e3ec/manager/0.log" Dec 12 05:58:39 crc kubenswrapper[4796]: I1212 05:58:39.018459 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-kgq7g_6a645239-185e-4bfb-b8a8-9c442ae1c379/manager/0.log" Dec 12 05:58:39 crc kubenswrapper[4796]: I1212 05:58:39.242606 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8bscf_bb156fa4-57d0-457f-be10-e9c013f37a84/kube-rbac-proxy/0.log" Dec 12 05:58:39 crc kubenswrapper[4796]: I1212 05:58:39.310533 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8bscf_bb156fa4-57d0-457f-be10-e9c013f37a84/manager/0.log" Dec 12 05:58:39 crc kubenswrapper[4796]: I1212 05:58:39.819905 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8jkpm_e1ffcbd5-a171-4663-b7c1-dd12e4ac1b31/registry-server/0.log" Dec 12 05:58:39 crc kubenswrapper[4796]: I1212 05:58:39.844449 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d67f9f647-pf79m_2f68b517-0566-4a78-92bd-215a5b6e304b/operator/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.047487 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wdgv6_c3102315-cf09-47e4-b1b2-4721b38ac5b8/kube-rbac-proxy/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.188209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wdgv6_c3102315-cf09-47e4-b1b2-4721b38ac5b8/manager/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.299251 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8k4ws_7f217e33-5880-42b4-931f-8a4633195ffc/kube-rbac-proxy/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.449316 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8k4ws_7f217e33-5880-42b4-931f-8a4633195ffc/manager/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.501761 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7775c45dbc-9fh7g_f2d005ee-450b-4029-bb3c-a5b389edc347/manager/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.567443 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jz9xq_0d5457f7-3a7d-4a0e-a733-33c78860c9b5/operator/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.687344 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-rpkcq_252d73ba-87e9-492d-a9c4-2f8e4e8d66fa/kube-rbac-proxy/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.729799 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-rpkcq_252d73ba-87e9-492d-a9c4-2f8e4e8d66fa/manager/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.792556 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gvgzg_38f86aeb-2024-40b1-8c60-25c2c78ef7ac/kube-rbac-proxy/0.log" Dec 12 05:58:40 crc kubenswrapper[4796]: I1212 05:58:40.899764 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-gvgzg_38f86aeb-2024-40b1-8c60-25c2c78ef7ac/manager/0.log" Dec 12 05:58:41 crc kubenswrapper[4796]: I1212 05:58:41.031488 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-jh5td_bbeb9b29-5dc1-4cdf-94de-397cdb4a32de/kube-rbac-proxy/0.log" Dec 12 05:58:41 crc kubenswrapper[4796]: I1212 05:58:41.089264 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-jh5td_bbeb9b29-5dc1-4cdf-94de-397cdb4a32de/manager/0.log" Dec 12 05:58:41 crc kubenswrapper[4796]: I1212 05:58:41.162938 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-mt59j_cd307815-1f04-446d-a89b-60fa6574f0db/kube-rbac-proxy/0.log" Dec 12 05:58:41 crc kubenswrapper[4796]: I1212 05:58:41.283645 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-mt59j_cd307815-1f04-446d-a89b-60fa6574f0db/manager/0.log" Dec 12 05:58:47 crc kubenswrapper[4796]: I1212 05:58:47.483647 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmtpp" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" probeResult="failure" output=< Dec 12 05:58:47 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 12 05:58:47 crc kubenswrapper[4796]: > Dec 12 05:58:56 crc kubenswrapper[4796]: I1212 05:58:56.483734 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:56 crc kubenswrapper[4796]: I1212 05:58:56.551592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:57 crc kubenswrapper[4796]: I1212 05:58:57.014100 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:57 crc kubenswrapper[4796]: I1212 05:58:57.605784 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmtpp" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" containerID="cri-o://122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5" gracePeriod=2 Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.224038 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.360941 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7ltx\" (UniqueName: \"kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx\") pod \"02f56478-89d2-42ad-857a-23f121c48cb2\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.361086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content\") pod \"02f56478-89d2-42ad-857a-23f121c48cb2\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.361163 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities\") pod \"02f56478-89d2-42ad-857a-23f121c48cb2\" (UID: \"02f56478-89d2-42ad-857a-23f121c48cb2\") " Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.362041 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities" (OuterVolumeSpecName: "utilities") pod "02f56478-89d2-42ad-857a-23f121c48cb2" (UID: "02f56478-89d2-42ad-857a-23f121c48cb2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.379162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx" (OuterVolumeSpecName: "kube-api-access-w7ltx") pod "02f56478-89d2-42ad-857a-23f121c48cb2" (UID: "02f56478-89d2-42ad-857a-23f121c48cb2"). InnerVolumeSpecName "kube-api-access-w7ltx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.455384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f56478-89d2-42ad-857a-23f121c48cb2" (UID: "02f56478-89d2-42ad-857a-23f121c48cb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.463753 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.463786 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f56478-89d2-42ad-857a-23f121c48cb2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.463796 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7ltx\" (UniqueName: \"kubernetes.io/projected/02f56478-89d2-42ad-857a-23f121c48cb2-kube-api-access-w7ltx\") on node \"crc\" DevicePath \"\"" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.617692 4796 generic.go:334] "Generic (PLEG): container finished" podID="02f56478-89d2-42ad-857a-23f121c48cb2" containerID="122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5" exitCode=0 Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.617750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerDied","Data":"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5"} Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.617784 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmtpp" event={"ID":"02f56478-89d2-42ad-857a-23f121c48cb2","Type":"ContainerDied","Data":"c6e2ca03800d81ae900cd1473719ac0a221746f0775d5edb65be58517f9c2f3b"} Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.617804 4796 scope.go:117] "RemoveContainer" containerID="122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.617752 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmtpp" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.647754 4796 scope.go:117] "RemoveContainer" containerID="d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.674600 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.685942 4796 scope.go:117] "RemoveContainer" containerID="d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.691563 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmtpp"] Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.710515 4796 scope.go:117] "RemoveContainer" containerID="122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5" Dec 12 05:58:58 crc kubenswrapper[4796]: E1212 05:58:58.710884 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5\": container with ID starting with 122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5 not found: ID does not exist" containerID="122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.710916 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5"} err="failed to get container status \"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5\": rpc error: code = NotFound desc = could not find container \"122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5\": container with ID starting with 122a3c91610c286f6f20120a8f50fc4cc04c5b5fdadc748e7e53d9a83ba7d8b5 not found: ID does not exist" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.710936 4796 scope.go:117] "RemoveContainer" containerID="d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee" Dec 12 05:58:58 crc kubenswrapper[4796]: E1212 05:58:58.711213 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee\": container with ID starting with d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee not found: ID does not exist" containerID="d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.711253 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee"} err="failed to get container status \"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee\": rpc error: code = NotFound desc = could not find container \"d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee\": container with ID starting with d7c89bb9e53a1783729752a8e1357ddabe7b3506cdfac64051bbccdbb054acee not found: ID does not exist" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.711303 4796 scope.go:117] "RemoveContainer" containerID="d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b" Dec 12 05:58:58 crc kubenswrapper[4796]: E1212 05:58:58.711644 4796 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b\": container with ID starting with d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b not found: ID does not exist" containerID="d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b" Dec 12 05:58:58 crc kubenswrapper[4796]: I1212 05:58:58.711666 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b"} err="failed to get container status \"d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b\": rpc error: code = NotFound desc = could not find container \"d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b\": container with ID starting with d4f683a3a841942810195c012b3c747858979f5ca7d0984f9e0930e0ca9d292b not found: ID does not exist" Dec 12 05:58:59 crc kubenswrapper[4796]: I1212 05:58:59.424908 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" path="/var/lib/kubelet/pods/02f56478-89d2-42ad-857a-23f121c48cb2/volumes" Dec 12 05:59:05 crc kubenswrapper[4796]: I1212 05:59:05.151783 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n86m7_89c07828-a1a4-4261-b744-fec105f01000/control-plane-machine-set-operator/0.log" Dec 12 05:59:05 crc kubenswrapper[4796]: I1212 05:59:05.433241 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45hnd_8e05fbfb-ba4c-465c-94a2-49f666f39c02/machine-api-operator/0.log" Dec 12 05:59:05 crc kubenswrapper[4796]: I1212 05:59:05.455343 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-45hnd_8e05fbfb-ba4c-465c-94a2-49f666f39c02/kube-rbac-proxy/0.log" Dec 12 05:59:19 crc kubenswrapper[4796]: I1212 05:59:19.218947 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ww44f_8860987b-111c-4cd3-b138-4cce9dce0ad8/cert-manager-controller/0.log" Dec 12 05:59:19 crc kubenswrapper[4796]: I1212 05:59:19.494580 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-lzcb5_5702c38e-0a66-415d-ba4a-6a32f7dbbc70/cert-manager-webhook/0.log" Dec 12 05:59:19 crc kubenswrapper[4796]: I1212 05:59:19.528076 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-k98g8_80433c18-6202-449a-8982-1d738afc9e14/cert-manager-cainjector/0.log" Dec 12 05:59:32 crc kubenswrapper[4796]: I1212 05:59:32.888464 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-jzd58_f1e50c4c-9467-4663-a305-6077b4dc5b1d/nmstate-console-plugin/0.log" Dec 12 05:59:33 crc kubenswrapper[4796]: I1212 05:59:33.078205 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lc8l9_284c3da0-54ab-47f6-960d-063c58c0f870/nmstate-handler/0.log" Dec 12 05:59:33 crc kubenswrapper[4796]: I1212 05:59:33.214067 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-qrqkv_c918c9c3-c2d3-415d-9942-16385200a014/kube-rbac-proxy/0.log" Dec 12 05:59:33 crc kubenswrapper[4796]: I1212 05:59:33.268342 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-qrqkv_c918c9c3-c2d3-415d-9942-16385200a014/nmstate-metrics/0.log" Dec 12 05:59:33 crc kubenswrapper[4796]: I1212 05:59:33.371793 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-w2zw4_3637ba14-6803-4897-9b95-09119916eaa5/nmstate-operator/0.log" Dec 12 05:59:33 crc kubenswrapper[4796]: I1212 05:59:33.602546 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-bd782_3d70cee4-8e4a-49fe-a0c1-e26a7452ba32/nmstate-webhook/0.log" Dec 12 05:59:47 crc kubenswrapper[4796]: I1212 05:59:47.987166 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vm69t_dfbd0368-d699-416a-bf10-c6e5a6716c1a/kube-rbac-proxy/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.083936 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-vm69t_dfbd0368-d699-416a-bf10-c6e5a6716c1a/controller/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.176476 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.341832 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.377386 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.438493 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.438647 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.545824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.580243 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.681797 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:59:48 crc kubenswrapper[4796]: I1212 05:59:48.828209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.082092 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-reloader/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.112037 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-frr-files/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.136654 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/cp-metrics/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.154925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/controller/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.357884 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/frr-metrics/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.372674 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/kube-rbac-proxy-frr/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.442141 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/kube-rbac-proxy/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.602116 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/reloader/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.796658 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-fpns2_7585dd28-35f6-4a54-b39c-9bdbecf98c13/frr-k8s-webhook-server/0.log" Dec 12 05:59:49 crc kubenswrapper[4796]: I1212 05:59:49.910741 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54b76c8dd-989lk_db1474b8-5eda-4d9e-8364-21082cc5d214/manager/0.log" Dec 12 05:59:50 crc kubenswrapper[4796]: I1212 05:59:50.172261 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c7ffbcf95-nsmtv_9d65581e-d568-49dc-9be0-4e4f06ce52e4/webhook-server/0.log" Dec 12 05:59:50 crc kubenswrapper[4796]: I1212 05:59:50.444837 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nvw2w_379b9266-5fd0-4c73-8d8e-376e85112dbd/kube-rbac-proxy/0.log" Dec 12 05:59:50 crc kubenswrapper[4796]: I1212 05:59:50.881882 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nvw2w_379b9266-5fd0-4c73-8d8e-376e85112dbd/speaker/0.log" Dec 12 05:59:51 crc kubenswrapper[4796]: I1212 05:59:51.042221 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s6n9k_a0877598-c08d-4323-9919-775d9d1f789d/frr/0.log" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.560572 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 05:59:58 crc kubenswrapper[4796]: E1212 05:59:58.561976 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="extract-utilities" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.562000 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="extract-utilities" Dec 12 05:59:58 crc kubenswrapper[4796]: E1212 05:59:58.562042 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.562052 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" Dec 12 05:59:58 crc kubenswrapper[4796]: 
E1212 05:59:58.562088 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="extract-content" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.562097 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="extract-content" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.562349 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f56478-89d2-42ad-857a-23f121c48cb2" containerName="registry-server" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.564013 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.571297 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.719036 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.719439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qmw\" (UniqueName: \"kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.719678 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.821455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qmw\" (UniqueName: \"kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.821645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.821718 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.822194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.822310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.842258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qmw\" (UniqueName: \"kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw\") pod \"community-operators-6qp8q\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:58 crc kubenswrapper[4796]: I1212 05:59:58.887076 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 05:59:59 crc kubenswrapper[4796]: I1212 05:59:59.573506 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.150431 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8"] Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.152563 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.155399 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.155522 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.167426 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8"] Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.194885 4796 generic.go:334] "Generic (PLEG): container finished" podID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerID="e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9" exitCode=0 Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.194942 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerDied","Data":"e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9"} Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.194966 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerStarted","Data":"a4d814df40534f6f3cbe9be54471d5ec955de946a445c9cc9f8ce828fb00c121"} Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.275137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume\") pod 
\"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.275473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwq7\" (UniqueName: \"kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.275556 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.377359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.377711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqwq7\" (UniqueName: \"kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.377794 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.378629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.390016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.397302 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqwq7\" (UniqueName: \"kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7\") pod \"collect-profiles-29425320-vl4q8\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.471466 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:00 crc kubenswrapper[4796]: W1212 06:00:00.944443 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod823e4547_eda5_45ca_be14_8c3c1a05d7d1.slice/crio-5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5 WatchSource:0}: Error finding container 5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5: Status 404 returned error can't find the container with id 5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5 Dec 12 06:00:00 crc kubenswrapper[4796]: I1212 06:00:00.961089 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8"] Dec 12 06:00:01 crc kubenswrapper[4796]: I1212 06:00:01.204945 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" event={"ID":"823e4547-eda5-45ca-be14-8c3c1a05d7d1","Type":"ContainerStarted","Data":"2e874d269ac34269ae189fc4507484512aaaa478057f956cd898aa436662445f"} Dec 12 06:00:01 crc kubenswrapper[4796]: I1212 06:00:01.205343 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" event={"ID":"823e4547-eda5-45ca-be14-8c3c1a05d7d1","Type":"ContainerStarted","Data":"5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5"} Dec 12 06:00:01 crc kubenswrapper[4796]: I1212 06:00:01.224074 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" podStartSLOduration=1.224056422 podStartE2EDuration="1.224056422s" podCreationTimestamp="2025-12-12 06:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 06:00:01.221211563 +0000 UTC m=+5192.097228710" watchObservedRunningTime="2025-12-12 06:00:01.224056422 +0000 UTC m=+5192.100073569" Dec 12 06:00:02 crc kubenswrapper[4796]: I1212 06:00:02.214007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerStarted","Data":"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83"} Dec 12 06:00:02 crc kubenswrapper[4796]: I1212 06:00:02.215387 4796 generic.go:334] "Generic (PLEG): container finished" podID="823e4547-eda5-45ca-be14-8c3c1a05d7d1" containerID="2e874d269ac34269ae189fc4507484512aaaa478057f956cd898aa436662445f" exitCode=0 Dec 12 06:00:02 crc kubenswrapper[4796]: I1212 06:00:02.215429 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" event={"ID":"823e4547-eda5-45ca-be14-8c3c1a05d7d1","Type":"ContainerDied","Data":"2e874d269ac34269ae189fc4507484512aaaa478057f956cd898aa436662445f"} Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.228627 4796 generic.go:334] "Generic (PLEG): container finished" podID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerID="cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83" exitCode=0 Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.228657 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerDied","Data":"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83"} Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.581226 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.747240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume\") pod \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.747380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqwq7\" (UniqueName: \"kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7\") pod \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.747443 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume\") pod \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\" (UID: \"823e4547-eda5-45ca-be14-8c3c1a05d7d1\") " Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.748402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "823e4547-eda5-45ca-be14-8c3c1a05d7d1" (UID: "823e4547-eda5-45ca-be14-8c3c1a05d7d1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.766752 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7" (OuterVolumeSpecName: "kube-api-access-rqwq7") pod "823e4547-eda5-45ca-be14-8c3c1a05d7d1" (UID: "823e4547-eda5-45ca-be14-8c3c1a05d7d1"). InnerVolumeSpecName "kube-api-access-rqwq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.773403 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "823e4547-eda5-45ca-be14-8c3c1a05d7d1" (UID: "823e4547-eda5-45ca-be14-8c3c1a05d7d1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.849877 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqwq7\" (UniqueName: \"kubernetes.io/projected/823e4547-eda5-45ca-be14-8c3c1a05d7d1-kube-api-access-rqwq7\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.849910 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823e4547-eda5-45ca-be14-8c3c1a05d7d1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:03 crc kubenswrapper[4796]: I1212 06:00:03.849919 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823e4547-eda5-45ca-be14-8c3c1a05d7d1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.209761 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.238716 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerStarted","Data":"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573"} Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.241501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" event={"ID":"823e4547-eda5-45ca-be14-8c3c1a05d7d1","Type":"ContainerDied","Data":"5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5"} Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.241533 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5795385b723dcf40ea499cbc6f5f759cb2a202d985294404e6f8f5e28eb176b5" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.241575 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425320-vl4q8" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.265670 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qp8q" podStartSLOduration=2.7439495000000003 podStartE2EDuration="6.265643137s" podCreationTimestamp="2025-12-12 05:59:58 +0000 UTC" firstStartedPulling="2025-12-12 06:00:00.197235638 +0000 UTC m=+5191.073252795" lastFinishedPulling="2025-12-12 06:00:03.718929285 +0000 UTC m=+5194.594946432" observedRunningTime="2025-12-12 06:00:04.260830706 +0000 UTC m=+5195.136847853" watchObservedRunningTime="2025-12-12 06:00:04.265643137 +0000 UTC m=+5195.141660284" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.317136 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd"] Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.326561 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425275-gtrtd"] Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.534060 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.571007 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.571008 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.765170 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/util/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.813588 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/pull/0.log" Dec 12 06:00:04 crc kubenswrapper[4796]: I1212 06:00:04.882907 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46vpwf_52b4af6b-5192-4766-8b22-d099bb744669/extract/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.000812 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.207578 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.244696 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.257478 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.423922 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2" path="/var/lib/kubelet/pods/747ffd9e-08a7-4cdc-a04c-3e8f90d45ce2/volumes" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.560418 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/pull/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.561166 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/util/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.561446 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8p99kw_1fff455f-8742-424b-96a4-32f9ddac34f7/extract/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.788248 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.940962 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 06:00:05 crc kubenswrapper[4796]: I1212 06:00:05.978096 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.005688 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.151251 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-content/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.205003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/extract-utilities/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.410027 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.816900 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lg6bx_dcf179a6-7e07-4022-a119-a9b97737e0db/registry-server/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.819229 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.821557 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 06:00:06 crc kubenswrapper[4796]: I1212 06:00:06.836100 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.050352 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-utilities/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.103917 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/extract-content/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.424734 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-utilities/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.682752 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-utilities/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.751162 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-content/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.788318 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-content/0.log" Dec 12 06:00:07 crc kubenswrapper[4796]: I1212 06:00:07.850271 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5bbk7_bcb78ed9-fb89-45b1-abc2-4ac316d2b5cb/registry-server/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.037989 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-utilities/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.038436 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/registry-server/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.078908 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qp8q_8990c995-3aac-414d-97fe-8bfdcbe9ac9d/extract-content/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.246248 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sng6c_05678928-a7d3-4250-8454-abadf034f217/marketplace-operator/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.327353 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.543371 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.549027 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.549176 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.884739 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-utilities/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.888131 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.889162 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.953651 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.996591 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 06:00:08 crc kubenswrapper[4796]: I1212 06:00:08.998517 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/extract-content/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.065787 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z9w78_21ae0f48-17be-4f69-a5c0-bf9c72205b24/registry-server/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.270402 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.271407 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.285925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.323932 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.470428 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-utilities/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.483343 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/extract-content/0.log" Dec 12 06:00:09 crc kubenswrapper[4796]: I1212 06:00:09.547414 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 06:00:10 crc kubenswrapper[4796]: I1212 06:00:10.115748 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-k2krf_d1335c04-c002-4da2-af48-7b5cd6910c27/registry-server/0.log" Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.299576 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6qp8q" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="registry-server" containerID="cri-o://6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573" gracePeriod=2 Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.743132 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.917836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45qmw\" (UniqueName: \"kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw\") pod \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.918071 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content\") pod \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.918191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities\") pod \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\" (UID: \"8990c995-3aac-414d-97fe-8bfdcbe9ac9d\") " Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.918701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities" (OuterVolumeSpecName: "utilities") pod "8990c995-3aac-414d-97fe-8bfdcbe9ac9d" (UID: "8990c995-3aac-414d-97fe-8bfdcbe9ac9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.919111 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.929143 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw" (OuterVolumeSpecName: "kube-api-access-45qmw") pod "8990c995-3aac-414d-97fe-8bfdcbe9ac9d" (UID: "8990c995-3aac-414d-97fe-8bfdcbe9ac9d"). InnerVolumeSpecName "kube-api-access-45qmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 06:00:11 crc kubenswrapper[4796]: I1212 06:00:11.967455 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8990c995-3aac-414d-97fe-8bfdcbe9ac9d" (UID: "8990c995-3aac-414d-97fe-8bfdcbe9ac9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.020630 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.020663 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45qmw\" (UniqueName: \"kubernetes.io/projected/8990c995-3aac-414d-97fe-8bfdcbe9ac9d-kube-api-access-45qmw\") on node \"crc\" DevicePath \"\"" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.309434 4796 generic.go:334] "Generic (PLEG): container finished" podID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerID="6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573" exitCode=0 Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.309502 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qp8q" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.309502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerDied","Data":"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573"} Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.309592 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qp8q" event={"ID":"8990c995-3aac-414d-97fe-8bfdcbe9ac9d","Type":"ContainerDied","Data":"a4d814df40534f6f3cbe9be54471d5ec955de946a445c9cc9f8ce828fb00c121"} Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.309622 4796 scope.go:117] "RemoveContainer" containerID="6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.335538 4796 scope.go:117] "RemoveContainer" containerID="cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.362628 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.367926 4796 scope.go:117] "RemoveContainer" containerID="e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.375852 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qp8q"] Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.431880 4796 scope.go:117] "RemoveContainer" containerID="6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573" Dec 12 06:00:12 crc kubenswrapper[4796]: E1212 06:00:12.432499 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573\": container with ID starting with 6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573 not found: ID does not exist" containerID="6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.432524 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573"} err="failed to get container status 
\"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573\": rpc error: code = NotFound desc = could not find container \"6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573\": container with ID starting with 6bb86880519ec5ac0179168f8dba52d2c60cfd967584121cd44f5f7955c62573 not found: ID does not exist" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.432544 4796 scope.go:117] "RemoveContainer" containerID="cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83" Dec 12 06:00:12 crc kubenswrapper[4796]: E1212 06:00:12.433031 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83\": container with ID starting with cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83 not found: ID does not exist" containerID="cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.433052 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83"} err="failed to get container status \"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83\": rpc error: code = NotFound desc = could not find container \"cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83\": container with ID starting with cb481331e7c321a175b23dd302f060bd1881e3f55ca30c172d950663ef3edc83 not found: ID does not exist" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.433066 4796 scope.go:117] "RemoveContainer" containerID="e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9" Dec 12 06:00:12 crc kubenswrapper[4796]: E1212 06:00:12.433613 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9\": container with ID starting with e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9 not found: ID does not exist" containerID="e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9" Dec 12 06:00:12 crc kubenswrapper[4796]: I1212 06:00:12.433631 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9"} err="failed to get container status \"e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9\": rpc error: code = NotFound desc = could not find container \"e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9\": container with ID starting with e516e3d6a88c5a458fd6afe7c8b6ac5da811e18d3a632c6694b26668a79b2fa9 not found: ID does not exist" Dec 12 06:00:13 crc kubenswrapper[4796]: I1212 06:00:13.421341 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" path="/var/lib/kubelet/pods/8990c995-3aac-414d-97fe-8bfdcbe9ac9d/volumes" Dec 12 06:00:32 crc kubenswrapper[4796]: I1212 06:00:32.969263 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 06:00:32 crc kubenswrapper[4796]: I1212 06:00:32.969722 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.154215 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29425321-bb96l"] Dec 12 06:01:00 crc kubenswrapper[4796]: E1212 06:01:00.155174 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="registry-server" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155201 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="registry-server" Dec 12 06:01:00 crc kubenswrapper[4796]: E1212 06:01:00.155243 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="extract-content" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155251 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="extract-content" Dec 12 06:01:00 crc kubenswrapper[4796]: E1212 06:01:00.155267 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="extract-utilities" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155306 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="extract-utilities" Dec 12 06:01:00 crc kubenswrapper[4796]: E1212 06:01:00.155321 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823e4547-eda5-45ca-be14-8c3c1a05d7d1" containerName="collect-profiles" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155328 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="823e4547-eda5-45ca-be14-8c3c1a05d7d1" containerName="collect-profiles" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155570 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="823e4547-eda5-45ca-be14-8c3c1a05d7d1" containerName="collect-profiles" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.155597 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8990c995-3aac-414d-97fe-8bfdcbe9ac9d" containerName="registry-server" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.156293 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.178515 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425321-bb96l"] Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.244149 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.244316 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.244567 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.244648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwhpf\" (UniqueName: \"kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.345618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.345707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwhpf\" (UniqueName: \"kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.345781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.345832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.354503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.354649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.356139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.365688 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwhpf\" (UniqueName: \"kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf\") pod \"keystone-cron-29425321-bb96l\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:00 crc kubenswrapper[4796]: I1212 06:01:00.515141 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:01 crc kubenswrapper[4796]: I1212 06:01:01.028014 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29425321-bb96l"] Dec 12 06:01:01 crc kubenswrapper[4796]: I1212 06:01:01.782328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425321-bb96l" event={"ID":"3416962f-9d6f-4394-ae3c-8b6fe5df23d5","Type":"ContainerStarted","Data":"fb7f87ce58116d72dfd5513bd103c9870eda88cb41723b771ac91c8a80280d86"} Dec 12 06:01:01 crc kubenswrapper[4796]: I1212 06:01:01.782376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425321-bb96l" event={"ID":"3416962f-9d6f-4394-ae3c-8b6fe5df23d5","Type":"ContainerStarted","Data":"d122690f56ebbb6ef37b635c74c351bd38d791c8985760c33028e64be8cfffdb"} Dec 12 06:01:01 crc kubenswrapper[4796]: I1212 06:01:01.809345 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29425321-bb96l" podStartSLOduration=1.80932325 podStartE2EDuration="1.80932325s" podCreationTimestamp="2025-12-12 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 06:01:01.806271364 +0000 UTC m=+5252.682288531" watchObservedRunningTime="2025-12-12 06:01:01.80932325 +0000 UTC m=+5252.685340417" Dec 12 06:01:02 crc kubenswrapper[4796]: I1212 06:01:02.969892 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 06:01:02 crc kubenswrapper[4796]: I1212 06:01:02.970274 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 06:01:04 crc kubenswrapper[4796]: I1212 06:01:04.828674 4796 generic.go:334] "Generic (PLEG): container finished" podID="3416962f-9d6f-4394-ae3c-8b6fe5df23d5" containerID="fb7f87ce58116d72dfd5513bd103c9870eda88cb41723b771ac91c8a80280d86" exitCode=0 Dec 12 06:01:04 crc kubenswrapper[4796]: I1212 06:01:04.828723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425321-bb96l" event={"ID":"3416962f-9d6f-4394-ae3c-8b6fe5df23d5","Type":"ContainerDied","Data":"fb7f87ce58116d72dfd5513bd103c9870eda88cb41723b771ac91c8a80280d86"} Dec 12 06:01:04 crc kubenswrapper[4796]: I1212 06:01:04.898392 4796 scope.go:117] "RemoveContainer" containerID="87c1bb5837742563e17809c79e9f3ab1db9b40a97239867f95257fbe00b817cd" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.220178 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.364742 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data\") pod \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.365213 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle\") pod \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.365248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwhpf\" (UniqueName: \"kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf\") pod \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.365334 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys\") pod \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\" (UID: \"3416962f-9d6f-4394-ae3c-8b6fe5df23d5\") " Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.370605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf" (OuterVolumeSpecName: "kube-api-access-lwhpf") pod "3416962f-9d6f-4394-ae3c-8b6fe5df23d5" (UID: "3416962f-9d6f-4394-ae3c-8b6fe5df23d5"). InnerVolumeSpecName "kube-api-access-lwhpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.371900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3416962f-9d6f-4394-ae3c-8b6fe5df23d5" (UID: "3416962f-9d6f-4394-ae3c-8b6fe5df23d5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.406656 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3416962f-9d6f-4394-ae3c-8b6fe5df23d5" (UID: "3416962f-9d6f-4394-ae3c-8b6fe5df23d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.434402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data" (OuterVolumeSpecName: "config-data") pod "3416962f-9d6f-4394-ae3c-8b6fe5df23d5" (UID: "3416962f-9d6f-4394-ae3c-8b6fe5df23d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.467043 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.467090 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwhpf\" (UniqueName: \"kubernetes.io/projected/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-kube-api-access-lwhpf\") on node \"crc\" DevicePath \"\"" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.467107 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.467118 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3416962f-9d6f-4394-ae3c-8b6fe5df23d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.847960 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29425321-bb96l" event={"ID":"3416962f-9d6f-4394-ae3c-8b6fe5df23d5","Type":"ContainerDied","Data":"d122690f56ebbb6ef37b635c74c351bd38d791c8985760c33028e64be8cfffdb"} Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.848222 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d122690f56ebbb6ef37b635c74c351bd38d791c8985760c33028e64be8cfffdb" Dec 12 06:01:06 crc kubenswrapper[4796]: I1212 06:01:06.848009 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29425321-bb96l" Dec 12 06:01:32 crc kubenswrapper[4796]: I1212 06:01:32.970231 4796 patch_prober.go:28] interesting pod/machine-config-daemon-mxh7m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 06:01:32 crc kubenswrapper[4796]: I1212 06:01:32.970804 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 06:01:32 crc kubenswrapper[4796]: I1212 06:01:32.971814 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" Dec 12 06:01:32 crc kubenswrapper[4796]: I1212 06:01:32.972637 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd"} pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 06:01:32 crc kubenswrapper[4796]: I1212 06:01:32.972711 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerName="machine-config-daemon" containerID="cri-o://e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" gracePeriod=600 Dec 12 06:01:33 crc kubenswrapper[4796]: E1212 06:01:33.108725 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:01:33 crc kubenswrapper[4796]: I1212 06:01:33.129314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerDied","Data":"e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd"} Dec 12 06:01:33 crc kubenswrapper[4796]: I1212 06:01:33.129273 4796 generic.go:334] "Generic (PLEG): container finished" podID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" exitCode=0 Dec 12 06:01:33 crc kubenswrapper[4796]: I1212 06:01:33.129383 4796 scope.go:117] "RemoveContainer" containerID="a6b764c514fecf5f6c2879d304e8e05a77be00b027192f025422fedf7b566b48" Dec 12 06:01:33 crc kubenswrapper[4796]: I1212 06:01:33.129966 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:01:33 crc kubenswrapper[4796]: E1212 06:01:33.130206 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:01:44 crc kubenswrapper[4796]: I1212 06:01:44.411938 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:01:44 crc kubenswrapper[4796]: E1212 06:01:44.413724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:01:55 crc kubenswrapper[4796]: I1212 06:01:55.416216 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:01:55 crc kubenswrapper[4796]: E1212 06:01:55.418110 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:02:09 crc kubenswrapper[4796]: I1212 06:02:09.418663 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:02:09 crc kubenswrapper[4796]: E1212 06:02:09.419203 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:02:20 crc kubenswrapper[4796]: I1212 06:02:20.411162 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:02:20 crc kubenswrapper[4796]: E1212 06:02:20.412062 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:02:25 crc kubenswrapper[4796]: I1212 06:02:25.669873 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerID="475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b" exitCode=0 Dec 12 06:02:25 crc kubenswrapper[4796]: I1212 06:02:25.669977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9xz97/must-gather-gj8nj" event={"ID":"bcb0f8fd-59e3-4053-8f8d-6a30256e1491","Type":"ContainerDied","Data":"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b"} Dec 12 06:02:25 crc 
kubenswrapper[4796]: I1212 06:02:25.671674 4796 scope.go:117] "RemoveContainer" containerID="475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b" Dec 12 06:02:26 crc kubenswrapper[4796]: I1212 06:02:26.658186 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9xz97_must-gather-gj8nj_bcb0f8fd-59e3-4053-8f8d-6a30256e1491/gather/0.log" Dec 12 06:02:31 crc kubenswrapper[4796]: I1212 06:02:31.411368 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:02:31 crc kubenswrapper[4796]: E1212 06:02:31.411937 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:02:39 crc kubenswrapper[4796]: I1212 06:02:39.856707 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9xz97/must-gather-gj8nj"] Dec 12 06:02:39 crc kubenswrapper[4796]: I1212 06:02:39.857481 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9xz97/must-gather-gj8nj" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="copy" containerID="cri-o://cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6" gracePeriod=2 Dec 12 06:02:39 crc kubenswrapper[4796]: I1212 06:02:39.868600 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9xz97/must-gather-gj8nj"] Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.533318 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9xz97_must-gather-gj8nj_bcb0f8fd-59e3-4053-8f8d-6a30256e1491/copy/0.log" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.533855 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.692972 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlsnv\" (UniqueName: \"kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv\") pod \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.693531 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output\") pod \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\" (UID: \"bcb0f8fd-59e3-4053-8f8d-6a30256e1491\") " Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.707567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv" (OuterVolumeSpecName: "kube-api-access-vlsnv") pod "bcb0f8fd-59e3-4053-8f8d-6a30256e1491" (UID: "bcb0f8fd-59e3-4053-8f8d-6a30256e1491"). InnerVolumeSpecName "kube-api-access-vlsnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.796354 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlsnv\" (UniqueName: \"kubernetes.io/projected/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-kube-api-access-vlsnv\") on node \"crc\" DevicePath \"\"" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.804637 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9xz97_must-gather-gj8nj_bcb0f8fd-59e3-4053-8f8d-6a30256e1491/copy/0.log" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.804910 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerID="cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6" exitCode=143 Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.804971 4796 scope.go:117] "RemoveContainer" containerID="cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.805094 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9xz97/must-gather-gj8nj" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.829976 4796 scope.go:117] "RemoveContainer" containerID="475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.892864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bcb0f8fd-59e3-4053-8f8d-6a30256e1491" (UID: "bcb0f8fd-59e3-4053-8f8d-6a30256e1491"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.899172 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcb0f8fd-59e3-4053-8f8d-6a30256e1491-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.906063 4796 scope.go:117] "RemoveContainer" containerID="cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6" Dec 12 06:02:40 crc kubenswrapper[4796]: E1212 06:02:40.910483 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6\": container with ID starting with cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6 not found: ID does not exist" containerID="cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.910542 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6"} err="failed to get container status \"cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6\": rpc error: code = NotFound desc = could not find container \"cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6\": container with ID starting with cb4e606cdbd974fda3b0361275aead78f863f23cbd1b57d924b2db8b3576bcf6 not found: ID does not exist" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.910575 4796 scope.go:117] "RemoveContainer" containerID="475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b" Dec 12 06:02:40 crc 
kubenswrapper[4796]: E1212 06:02:40.910933 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b\": container with ID starting with 475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b not found: ID does not exist" containerID="475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b" Dec 12 06:02:40 crc kubenswrapper[4796]: I1212 06:02:40.910970 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b"} err="failed to get container status \"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b\": rpc error: code = NotFound desc = could not find container \"475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b\": container with ID starting with 475e09034e377e88c91b26904761a74e812657424cb6a59a462564a1e6399d7b not found: ID does not exist" Dec 12 06:02:41 crc kubenswrapper[4796]: I1212 06:02:41.421012 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" path="/var/lib/kubelet/pods/bcb0f8fd-59e3-4053-8f8d-6a30256e1491/volumes" Dec 12 06:02:45 crc kubenswrapper[4796]: I1212 06:02:45.410923 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:02:45 crc kubenswrapper[4796]: E1212 06:02:45.411346 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:02:57 crc kubenswrapper[4796]: I1212 06:02:57.412866 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:02:57 crc kubenswrapper[4796]: E1212 06:02:57.413730 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:03:10 crc kubenswrapper[4796]: I1212 06:03:10.411655 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:03:10 crc kubenswrapper[4796]: E1212 06:03:10.412217 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:03:24 crc kubenswrapper[4796]: I1212 06:03:24.411227 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:03:24 crc kubenswrapper[4796]: E1212 06:03:24.412431 4796 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:03:37 crc kubenswrapper[4796]: I1212 06:03:37.412134 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:03:37 crc kubenswrapper[4796]: E1212 06:03:37.412867 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:03:48 crc kubenswrapper[4796]: I1212 06:03:48.411419 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:03:48 crc kubenswrapper[4796]: E1212 06:03:48.412087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.878173 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:03:49 crc kubenswrapper[4796]: E1212 06:03:49.879005 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="copy" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.879024 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="copy" Dec 12 06:03:49 crc kubenswrapper[4796]: E1212 06:03:49.879041 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416962f-9d6f-4394-ae3c-8b6fe5df23d5" containerName="keystone-cron" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.879048 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416962f-9d6f-4394-ae3c-8b6fe5df23d5" containerName="keystone-cron" Dec 12 06:03:49 crc kubenswrapper[4796]: E1212 06:03:49.879085 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="gather" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.879093 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="gather" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.882965 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="copy" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.883017 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb0f8fd-59e3-4053-8f8d-6a30256e1491" containerName="gather" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.883037 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3416962f-9d6f-4394-ae3c-8b6fe5df23d5" containerName="keystone-cron" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.884925 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:49 crc kubenswrapper[4796]: I1212 06:03:49.894413 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.044672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrb4\" (UniqueName: \"kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.044825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.044923 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.147196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrb4\" (UniqueName: \"kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.147295 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.147338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.147839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.147861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities\") pod 
\"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.167960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrb4\" (UniqueName: \"kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4\") pod \"redhat-marketplace-fz64t\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.203150 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.515420 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:03:50 crc kubenswrapper[4796]: I1212 06:03:50.594465 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerStarted","Data":"cc618699b95ce2e433a4daefd3d7ab1993361f1cf2701a4122e706d06fc75c7b"} Dec 12 06:03:51 crc kubenswrapper[4796]: I1212 06:03:51.603783 4796 generic.go:334] "Generic (PLEG): container finished" podID="58c521b0-95a1-4dc4-8411-f18459dcb3a7" containerID="5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5" exitCode=0 Dec 12 06:03:51 crc kubenswrapper[4796]: I1212 06:03:51.604028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerDied","Data":"5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5"} Dec 12 06:03:51 crc kubenswrapper[4796]: I1212 06:03:51.606620 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 06:03:52 crc kubenswrapper[4796]: I1212 06:03:52.617381 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerStarted","Data":"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb"} Dec 12 06:03:53 crc kubenswrapper[4796]: I1212 06:03:53.636158 4796 generic.go:334] "Generic (PLEG): container finished" podID="58c521b0-95a1-4dc4-8411-f18459dcb3a7" containerID="1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb" exitCode=0 Dec 12 06:03:53 crc kubenswrapper[4796]: I1212 06:03:53.636363 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerDied","Data":"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb"} Dec 12 06:03:54 crc kubenswrapper[4796]: I1212 06:03:54.648066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerStarted","Data":"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114"} Dec 12 06:03:54 crc kubenswrapper[4796]: I1212 06:03:54.676842 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fz64t" podStartSLOduration=2.907943989 podStartE2EDuration="5.676822106s" podCreationTimestamp="2025-12-12 06:03:49 +0000 UTC" firstStartedPulling="2025-12-12 06:03:51.606240723 +0000 UTC 
m=+5422.482257880" lastFinishedPulling="2025-12-12 06:03:54.37511884 +0000 UTC m=+5425.251135997" observedRunningTime="2025-12-12 06:03:54.665639645 +0000 UTC m=+5425.541656802" watchObservedRunningTime="2025-12-12 06:03:54.676822106 +0000 UTC m=+5425.552839263" Dec 12 06:04:00 crc kubenswrapper[4796]: I1212 06:04:00.203478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:00 crc kubenswrapper[4796]: I1212 06:04:00.203868 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:00 crc kubenswrapper[4796]: I1212 06:04:00.271959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:00 crc kubenswrapper[4796]: I1212 06:04:00.776064 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:00 crc kubenswrapper[4796]: I1212 06:04:00.831522 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:04:01 crc kubenswrapper[4796]: I1212 06:04:01.412294 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:04:01 crc kubenswrapper[4796]: E1212 06:04:01.412652 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:04:02 crc kubenswrapper[4796]: I1212 06:04:02.726002 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fz64t" podUID="58c521b0-95a1-4dc4-8411-f18459dcb3a7" containerName="registry-server" containerID="cri-o://e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114" gracePeriod=2 Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.138242 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.194797 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrb4\" (UniqueName: \"kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4\") pod \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.194924 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities\") pod \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.195051 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content\") pod \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\" (UID: \"58c521b0-95a1-4dc4-8411-f18459dcb3a7\") " Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.195850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities" (OuterVolumeSpecName: "utilities") pod "58c521b0-95a1-4dc4-8411-f18459dcb3a7" (UID: "58c521b0-95a1-4dc4-8411-f18459dcb3a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.205647 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4" (OuterVolumeSpecName: "kube-api-access-sfrb4") pod "58c521b0-95a1-4dc4-8411-f18459dcb3a7" (UID: "58c521b0-95a1-4dc4-8411-f18459dcb3a7"). InnerVolumeSpecName "kube-api-access-sfrb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.218453 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58c521b0-95a1-4dc4-8411-f18459dcb3a7" (UID: "58c521b0-95a1-4dc4-8411-f18459dcb3a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.296835 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.296874 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c521b0-95a1-4dc4-8411-f18459dcb3a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.296886 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfrb4\" (UniqueName: \"kubernetes.io/projected/58c521b0-95a1-4dc4-8411-f18459dcb3a7-kube-api-access-sfrb4\") on node \"crc\" DevicePath \"\"" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.736188 4796 generic.go:334] "Generic (PLEG): container finished" podID="58c521b0-95a1-4dc4-8411-f18459dcb3a7" containerID="e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114" exitCode=0 Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.736242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerDied","Data":"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114"} Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.736456 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz64t" event={"ID":"58c521b0-95a1-4dc4-8411-f18459dcb3a7","Type":"ContainerDied","Data":"cc618699b95ce2e433a4daefd3d7ab1993361f1cf2701a4122e706d06fc75c7b"} Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.736479 4796 scope.go:117] "RemoveContainer" containerID="e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.736263 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz64t" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.759941 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.768594 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz64t"] Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.769562 4796 scope.go:117] "RemoveContainer" containerID="1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.791731 4796 scope.go:117] "RemoveContainer" containerID="5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.840262 4796 scope.go:117] "RemoveContainer" containerID="e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114" Dec 12 06:04:03 crc kubenswrapper[4796]: E1212 06:04:03.840781 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114\": container with ID starting with e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114 not found: ID does not exist" containerID="e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.840817 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114"} err="failed to get container status \"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114\": rpc error: code = NotFound desc = could not find container \"e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114\": container with ID starting with e1eff9e54db1f1f93e7b6aa8b66ab6e3aaad6bf5a5f3b80c5ea4d6585ce8c114 not found: ID does not exist" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.840837 4796 scope.go:117] "RemoveContainer" containerID="1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb" Dec 12 06:04:03 crc kubenswrapper[4796]: E1212 06:04:03.841180 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb\": container with ID starting with 1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb not found: ID does not exist" containerID="1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.841201 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb"} err="failed to get container status \"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb\": rpc error: code = NotFound desc = could not find container \"1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb\": container with ID starting with 1eae547926c35185a69e9a08ab220b0a9a4ab8a0d884edc9eee1006b1b06f1bb not found: ID does not exist" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.841214 4796 scope.go:117] "RemoveContainer" containerID="5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5" Dec 12 06:04:03 crc kubenswrapper[4796]: E1212 06:04:03.841440 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5\": container with ID starting with 5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5 not found: ID does not exist" containerID="5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5" Dec 12 06:04:03 crc kubenswrapper[4796]: I1212 06:04:03.841470 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5"} err="failed to get container status \"5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5\": rpc error: code = NotFound desc = could not find container \"5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5\": container with ID starting with 5dd27dc2a85d0cf8ce210ca4dfb4d8e595cc7af5c7590c392f9d33e5ef172ab5 not found: ID does not exist" Dec 12 06:04:05 crc kubenswrapper[4796]: I1212 06:04:05.426267 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c521b0-95a1-4dc4-8411-f18459dcb3a7" path="/var/lib/kubelet/pods/58c521b0-95a1-4dc4-8411-f18459dcb3a7/volumes" Dec 12 06:04:12 crc kubenswrapper[4796]: I1212 06:04:12.411535 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:04:12 crc kubenswrapper[4796]: E1212 06:04:12.413607 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:04:26 crc kubenswrapper[4796]: I1212 06:04:26.410983 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:04:26 crc kubenswrapper[4796]: E1212 06:04:26.411724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:04:38 crc kubenswrapper[4796]: I1212 06:04:38.412192 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:04:38 crc kubenswrapper[4796]: E1212 06:04:38.413208 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:04:53 crc kubenswrapper[4796]: I1212 06:04:53.411540 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:04:53 crc kubenswrapper[4796]: E1212 06:04:53.413599 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:05:07 crc kubenswrapper[4796]: I1212 06:05:07.411837 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:05:07 crc kubenswrapper[4796]: E1212 06:05:07.412842 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:05:20 crc kubenswrapper[4796]: I1212 06:05:20.411863 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:05:20 crc kubenswrapper[4796]: E1212 06:05:20.412936 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:05:33 crc kubenswrapper[4796]: I1212 06:05:33.411797 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:05:33 crc kubenswrapper[4796]: E1212 06:05:33.412651 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:05:48 crc kubenswrapper[4796]: I1212 06:05:48.412103 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:05:48 crc kubenswrapper[4796]: E1212 06:05:48.413798 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:06:02 crc kubenswrapper[4796]: I1212 06:06:02.411174 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:06:02 crc kubenswrapper[4796]: E1212 06:06:02.411882 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:06:13 crc kubenswrapper[4796]: I1212 06:06:13.411590 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:06:13 crc kubenswrapper[4796]: E1212 06:06:13.413750 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:06:24 crc kubenswrapper[4796]: I1212 06:06:24.411390 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:06:24 crc kubenswrapper[4796]: E1212 06:06:24.413182 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxh7m_openshift-machine-config-operator(0403e92c-3d00-4092-a6d0-cdbc36b3ec1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" podUID="0403e92c-3d00-4092-a6d0-cdbc36b3ec1c" Dec 12 06:06:39 crc kubenswrapper[4796]: I1212 06:06:39.417758 4796 scope.go:117] "RemoveContainer" containerID="e9c837ca3c0cda461e0911b98aa17670ae9a5e8e703a575d0e0b218c73d18acd" Dec 12 06:06:40 crc kubenswrapper[4796]: I1212 06:06:40.226012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxh7m" event={"ID":"0403e92c-3d00-4092-a6d0-cdbc36b3ec1c","Type":"ContainerStarted","Data":"67ebaa96a593b863edb8bb48d2f344c0534c4b429b2998b041be36479dc6c5c4"}